forum_id | forum_title | forum_authors | forum_abstract | forum_pdf_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | note_text
---|---|---|---|---|---|---|---|---|---|---|---
U0P622bfUN | Federated Generative Learning with Foundation Models | [
"Jie Zhang",
"Xiao hua Qi",
"Shengyuan Pang",
"Siyuan Pan",
"Xiaobing Tu",
"Pengfei Wan",
"Bo Zhao"
] | Existing federated learning solutions focus on transmitting features, parameters, or gradients between clients and the server, which suffer from serious inefficiency and privacy-leakage problems. Thanks to emerging foundation generative models, we propose a novel federated learning framework, namely Federated Generative Learning. In this framework, each client creates text prompts that are tailored to its local data and sends them to the server. Given the received prompts, informative training data can be remotely synthesized on the server using foundation generative models. This new framework offers several advantages, including enhanced communication efficiency, improved resilience to distribution shift, significant performance gains, and enhanced privacy protection. We validate these benefits through extensive experiments on the ImageNet and DomainNet datasets; e.g., on the ImageNet100 dataset with a highly skewed data distribution, our method outperforms FedAvg by 12% in a single communication round. Moreover, our approach requires only 229-byte prompts for communication, while FedAvg necessitates the transmission of 42.7 MB of parameters. | /pdf/3fa3548f0c4b1f86504bdb1db4424c536e65a2b7.pdf | uP676dsarr | official_review | 1,698,824,961,127 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_HdAm"
] | summary: This work introduces a novel federated learning framework called Federated Generative Learning, which addresses the inefficiency and privacy issues of existing solutions that transmit features, parameters, or gradients between clients and servers. In this framework, clients generate text prompts tailored to their local data and send them to the server, where informative training data is synthesized using Stable Diffusion. This approach offers enhanced communication efficiency, significant performance gains, and improved privacy protection, as demonstrated through extensive experiments on the ImageNet and DomainNet datasets.
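For concreteness, below is a minimal sketch of the prompt-then-synthesize flow summarized above. It assumes Hugging Face `transformers` (BLIP-2) for client-side captioning and `diffusers` (Stable Diffusion) for server-side synthesis; the specific checkpoints, function names, and the downstream training loop are illustrative assumptions, not details fixed by the paper.

```python
# Illustrative sketch of Federated Generative Learning (not the authors' code).
# Client side: caption local images with an off-the-shelf captioning model (forward pass only).
# Server side: synthesize a proxy training set from the received prompts.
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from diffusers import StableDiffusionPipeline

def client_make_prompts(images, device="cuda"):
    """Instance-level prompts: one caption per local image; no raw pixels leave the client."""
    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    captioner = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
    ).to(device)
    prompts = []
    with torch.no_grad():  # captioning is inference-only
        for img in images:
            inputs = processor(images=img, return_tensors="pt").to(device, torch.float16)
            out = captioner.generate(**inputs, max_new_tokens=30)
            prompts.append(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
    return prompts  # a small text payload sent to the server

def server_synthesize(prompts_with_labels, images_per_prompt=1, device="cuda"):
    """Generate the synthetic training set on the server from the collected prompts."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to(device)
    synthetic = []
    for prompt, label in prompts_with_labels:
        for _ in range(images_per_prompt):
            synthetic.append((pipe(prompt).images[0], label))
    return synthetic  # the global classifier is then trained on `synthetic` as usual
```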
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: - This work proposes a novel learning framework to train local data without accessing the raw data directly.
- Communication of prompts instead of model parameters addresses several issues of existing federated learning frameworks: high communication cost and potential privacy threats from attackers.
weaknesses: - The proposed method may be highly dependent on the performance of both diffusion models and visual-captioning models.
- An ablation study of varying the foundation models is needed.
- In a similar vein, the local training dataset should be unseen during the pretraining of the foundation models and should be more difficult than ImageNet, which is a standard image classification dataset. As mentioned in the Introduction section, the local training data are more likely to be privacy-sensitive, so they are more likely to be unseen by or not contained in the pretraining data of foundation models such as BLIPv2 and Stable Diffusion. Evaluation on ImageNet or DomainNet implicitly assumes that the local data lie in a domain similar to, or a subset of, the pretraining dataset of the foundation models, which is publicly accessible or has no privacy issue.
- Clients in federated learning are often assumed to have limited capacity in memory or computation. Generating prompts using a large visual captioning model in each client is impractical.
questions: - The quality of synthetic data could be highly different according to domain discrepancy between the local training data and the pretraining data for the foundation model. Instead of using standard image classification datasets, does the proposed method work for federated learning on fine-grained classification such as CUB-200, Cars, and medical image datasets?
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
U0P622bfUN | Federated Generative Learning with Foundation Models | rBIpJVtNpZ | official_review | 1,698,700,659,171 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_wnsW"
] | summary: - The main idea of the paper is to use prompts to “summarize” the client-side data in federated learning. These prompts are then sent to the central server and fed to a foundation generative model, with the hope that the generated data distribution is close to the client data distribution.
- With this idea, federated learning can be made one-round or few-round to drastically reduce communication costs, where clients can just send over the prompts one-shot to the server as the prompts and labels require very little communication.
- The paper then evaluates on several natural image datasets (subsets from ImageNet) and shows that the proposed technique can match FedAvg in performance.
- The paper also performs some privacy analysis and shows that by transmitting prompts instead of gradients/model updates/data, the membership inference attack success rate drops significantly.
soundness: 2 fair
presentation: 4 excellent
contribution: 2 fair
strengths: - The proposed approach is interesting and novel to my understanding. Assuming the client data distributions can be well captured by the foundation generative model, the proposed technique has clear benefits in simplicity and in reducing communication costs.
- Putting aside the underlying assumptions of the proposed techniques (see weaknesses), the paper is overall well-executed in terms of the diversity of the experiments and visualizations.
- The paper is generally well-written and easy-to-follow.
weaknesses: [W1] The main weakness of the proposed method is the underlying assumption that client data can, in fact, be generated by foundational models. This sounds obvious but is key to the applicability of the proposed approach in practice. To put it bluntly, is the proposed solution in search of a problem?
1. Settings where FL is helpful—such as medical images across hospitals [1], user-generated text across mobile phones [2]—are often where the data distributions aren’t covered by the pre-training data of foundational models. The datasets used by the experiments are all natural image datasets (ImageNette, ImageFruit, etc.), which can be well-represented in the pre-training dataset of foundation generative models. I would appreciate results on non-natural image datasets.
2. In particular, if we consider horizontal FL settings (as with the paper), the server may even know about the possible classes / labels (e.g. federating binary classifiers) without communicating to the clients, in which case the “class-level prompts” may not be needed at all since the server can just generate images by itself.
[W2] More broadly, the threat model of the paper may need to be defined more clearly.
- What exactly is client privacy in this case? Can the client data still be considered “private” if you could already generate them with public foundation models (see also [3])? Does the privacy of the data lie in the pixels, or simply in the description of the pixels?
- In many cases, the descriptions of the images can already leak privacy. If we apply the proposed method to cross-device federated learning on users' photo data, the server could already learn a lot about the user data distribution and preferences. For example, following Sec 5.4 and Figure 6, knowing that a user has lots of golf photos (without knowing the pixels of the photos) already allows the FL service provider (e.g., Google) to sell targeted ads.
[1] FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings. NeurIPS 2022 Datasets and Benchmark. https://arxiv.org/abs/2210.04620
[2] https://research.google/pubs/pub47586/
[3] Considerations for Differentially Private Learning with Large-Scale Public Pretraining. https://arxiv.org/pdf/2212.06470.pdf
questions: - [Intro section] Why exactly does the proposed method provide robustness to data heterogeneity? Heterogeneity can still surface in the (instance-level) client prompts and subsequently the generated images.
- Minor comment: consider using different citation commands `\citet` , `\cite`, etc. in LaTeX to make the formatting of the in-text references consistent.
flag_for_ethics_review: ['Yes, Privacy, security and safety']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
U0P622bfUN | Federated Generative Learning with Foundation Models | VulmbS3YYc | official_review | 1,698,549,593,720 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_ZNb2"
] | summary: The paper addresses efficiency and client-shift issues in federated learning by harnessing generative foundation models. Unlike traditional approaches that communicate model parameters, this work has clients send instance-level or class-level prompts, generated by a pre-trained captioning model, to the server. The server aggregates these prompts to produce a proxy dataset via a pre-trained generative model, enabling standard federated learning on this dataset. The server then dispatches the refined weights back to the clients. Empirical evaluations underscore the efficacy of the proposed approach.
soundness: 2 fair
presentation: 2 fair
contribution: 2 fair
strengths: 1. The proposed approach significantly reduces communication costs compared to traditional parameter transmission.
2. By leveraging foundation models to synthesize proxy data, the authors effectively mitigate the client-shift problem.
3. A variety of experimental settings across four datasets demonstrate the robustness and effectiveness of the proposed method.
weaknesses: 1. The training framework is predominantly tailored for image datasets, limiting its applicability.
2. The method heavily depends on the congruence between the captioning and generative models, making it challenging to ensure the proxy dataset's distribution aligns with the private data.
3. The experimental setup, with only five clients, may not adequately represent real-world scenarios; expanding the evaluation to include 50 or 100 clients could provide more insightful results.
4. The comparison to a single baseline, FedAvg, falls short; including comparisons to advanced Federated Learning frameworks could better highlight the proposed method's effectiveness.
5. Table 2 shows the proposed method outperforming centralized learning significantly; a thorough explanation of this phenomenon is warranted.
questions: 1. I wonder if the approach can be applied to other types of datasets besides image datasets.
2. What would the experimental results be when the number of clients becomes larger, e.g., 100?
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
U0P622bfUN | Federated Generative Learning with Foundation Models | 2ZurMVHvCB | official_review | 1,698,543,827,436 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_zu8q"
] | summary: The Federated Generative Learning (FGL) framework offers a novel approach to federated learning, leveraging foundational generative models like Stable Diffusion to generate training data from prompts shared by clients. Clients contribute class-level or instance-level prompts, encapsulating key features of their local data. The server, in turn, amalgamates these prompts and synthesizes corresponding training data for global model training. This approach trims down communication costs since only concise prompts, and not bulky gradients or models, are transferred. The system also boasts robustness to data diversity and has demonstrated superior performance: with just one communication round, it outdid 200 rounds of FedAvg in accuracy. When trialed on skewed ImageNet100 distributions, FGL exceeded FedAvg's performance by 30% in just five communication rounds. Apart from being efficient, FGL also enhances privacy, as prompts reveal less private data than traditional methods. Evaluations confirmed no private-data memorization in the synthetic images and enhanced resilience against membership inference attacks. However, challenges persist with non-IID data, intricate domains, and the potential risks associated with prompts.
soundness: 2 fair
presentation: 3 good
contribution: 2 fair
strengths: 1. Novel idea of using foundation models to synthesize training data for federated learning, enabling low communication costs and better privacy.
2. Compelling experimental results demonstrating accuracy improvements over traditional FedAvg, especially with skewed data distributions.
3. Thorough analysis and quantification of privacy benefits, showing reduced memorization and vulnerability to membership inference attacks.
weaknesses: 1. The evaluation of the Federated Generative Learning (FGL) framework is limited to simpler domains like ImageNet and doesn't extend to other areas, casting doubt on whether prompts can encapsulate complexity.
2. While FGL aids in data generation for non-IID data, achieving congruence with a global distribution is yet to be addressed.
3. Security risks of prompts require more analysis. Could prompts be reverse-engineered to obtain private data?
4. The framework hasn't been benchmarked against other federated learning methods that employ generative models.
questions: Please refer to the weaknesses above.
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
U0P622bfUN | Federated Generative Learning with Foundation Models | YhuEd25YHs | official_comment | 1,700,233,875,070 | uP676dsarr | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response [1 / 2]
comment: We thank Reviewer HdAm for the valuable feedback and constructive comments. We have carefully answered all of your questions and added additional experimental results below.
**Q1: The proposed method may be highly dependent on the performance of both diffusion models and visual-captioning models. An ablation study of varying the foundation models is needed.**
**A1**: Thanks for this insightful point. To investigate the impact of various generative models on the results, we followed the setting in [1]. Our experiments primarily focus on three prevalent conditional diffusion models: DiT [3], GLIDE [2], and Stable Diffusion. We use these off-the-shelf models to generate synthetic images. Specifically, for GLIDE and Stable Diffusion, the prompt was configured as "a photo of {label name}, real-world images, high resolution." For DiT, the input comprised the label ID corresponding to the ImageNet1k dataset. The images synthesized by DiT and GLIDE are of dimensions 256x256, whereas those produced by Stable Diffusion are of dimensions 512x512. As shown in the following table, even when we vary the foundation models used in our method, FGL consistently outperforms FedAvg by a significant margin. This observation serves as evidence for the generality of our approach. We have added these results in Appendix A.4.3.
| Method | one-shot | 5-round, $\beta$=0.01 | 5-round, $\beta$=0.5 | IID |
|:-------------------------:|:--------:|:------------------:|:-----------------:|:--------:|
| Ours w/ Stable Diffusion | **85.2** | **82.8** | **94.1** | **95.6** |
| Ours w/ Glide | 79.0 | 76.2 | 89.4 | 89.4 |
| Ours w/ Dit | 76.2 | 74.6 | 90.2 | 92.8 |
| FedAvg (120-round) | - | 51.6 | 75.1 | 79.2 |
[1] Li, Zheng, et al. "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?." arXiv preprint arXiv:2305.12954 (2023).
[2] Nichol, Alex, et al. "Glide: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).
[3] Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
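As a concrete illustration of the class-level setup used above, the snippet below builds prompts from the quoted template and measures the resulting payload; the class names are placeholders, and the comparison with model weights is only meant to illustrate the communication-cost argument, not to reproduce the paper's exact numbers.

```python
# Sketch: class-level prompts built from the template quoted above, and their payload size.
import json

TEMPLATE = "a photo of {label name}, real-world images, high resolution"
class_names = ["tench", "English springer", "cassette player"]  # placeholder client classes

prompts = {name: TEMPLATE.replace("{label name}", name) for name in class_names}
payload = json.dumps(prompts).encode("utf-8")
print(f"{len(prompts)} class-level prompts -> {len(payload)} bytes to transmit")
# By contrast, transmitting a ResNet-scale model means tens of millions of float32
# parameters, i.e. tens of MB per communication round.
```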
**Q2: Clients in federated learning are often assumed to have limited capacity in memory or computation.**
**A2**:
- First, compared to FedAvg, our method introduces only one additional operation on the client side, i.e., prompt generation, which involves only forward propagation and does not impose a significant computational cost. All heavy computation is executed on the server side during the initial communication round: the server trains a model with a strong initial state, and the client subsequently performs regular model updates, which incurs no additional cost compared with FedAvg.
- Secondly, our method is particularly well-suited for cross-silo FL, where the clients represent organizations or companies. In this context, the number of clients is typically small, but they possess substantial computational resources. Furthermore, this scenario emphasizes the importance of protecting clients' local data from potential leaks, which constitutes a significant contribution of our approach towards preserving privacy. |
U0P622bfUN | Federated Generative Learning with Foundation Models | 5pE0TtLbRY | official_comment | 1,700,233,964,709 | uP676dsarr | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response [2 / 2]
comment: **Q3: Evaluation on ImageNet or DomainNet implicitly assumes that local data have a domain similar to, or a subset of, the pretraining dataset of foundation models, which is publicly accessible or has no privacy issue.**
**A3**: Thanks for pointing this out. Here, we would like to address this from three aspects:
- Although the pretraining dataset and the private dataset may have some domain similarities (e.g., both may contain common real-world scenes), the tasks of ImageSquawk (fine-grained bird classification) and QuickDraw (a non-realistic domain in DomainNet) that we show in our experiments are challenging. It is a non-trivial task to train a model using only synthetic data generated by foundation models and achieve good accuracy on the ImageNet or DomainNet test sets.
- Furthermore, even when there are some domain similarities between the pretraining dataset and the private dataset, does that mean there is no need to discuss the privacy risk? Definitely not! Consider a scenario where a public dataset contains various images of cats, while a private dataset contains personal images of cats belonging to individual users. Although both datasets involve images of cats, the private dataset may contain users' personal information, such as their family photos or addresses. Therefore, even if the two datasets are similar in some aspects, the private data still carries privacy risks and needs to be properly protected. Taking the Membership Inference Attack (MIA) as an example, consider an adversary that wants to probe an ML model to test membership of an individual's data in the model's training data. In this scenario, the adversary is more likely to have access to some representative images of the target individual, but not necessarily the ones used for training the model. As shown in Figure 8, we implemented LiRA, the state-of-the-art MIA algorithm (a minimal sketch of its scoring step is given after the table below). The experimental results demonstrate that our approach protects the sensitive information of the members in the clients' data, since the model training process is never exposed to any private data. In contrast, traditional federated learning methods train directly on private data, posing a high risk of exposing the sensitive information of the members in the clients' data (i.e., for certain private data samples, attackers can identify with high confidence the client from which a sample originates). To the best of our knowledge, prior to our proposed approach, no one had put forth a training paradigm that effectively defends against LiRA while concurrently maintaining utility (i.e., achieving high test accuracy).
- Finally, our method adapts easily even to particularly challenging domains such as remote sensing images or fine-grained classification datasets. We conducted experiments on several fine-grained image classification datasets, namely CUB-200 and Stanford Cars, as well as the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. See more details in Appendix A.4.2.
| Prompt type | Training type | Dataset | FedAvg, $\beta=0.01$ | FedAvg, $\beta=0.5$ | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta=0.01$ | Ours (5-round), $\beta=0.5$ | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |
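Since the argument above relies on LiRA, here is a minimal sketch of its per-example scoring step: shadow-model confidences on the target example are logit-scaled, Gaussians are fit to the "in" and "out" populations, and the membership score is their log-likelihood ratio. Shadow-model training is omitted and all numbers are placeholders; this is an illustration of the attack being defended against, not the authors' evaluation code.

```python
# Minimal LiRA scoring sketch (shadow-model training omitted).
import numpy as np
from scipy.stats import norm

def logit_scale(p, eps=1e-6):
    """Stabilized logit of the model's confidence on the true label."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p) - np.log(1 - p)

def lira_score(conf_target, confs_in, confs_out):
    """Log-likelihood ratio that the observed confidence came from models trained WITH the example."""
    x = logit_scale(conf_target)
    mu_in, sd_in = np.mean(logit_scale(confs_in)), np.std(logit_scale(confs_in)) + 1e-6
    mu_out, sd_out = np.mean(logit_scale(confs_out)), np.std(logit_scale(confs_out)) + 1e-6
    return norm.logpdf(x, mu_in, sd_in) - norm.logpdf(x, mu_out, sd_out)

# Placeholder shadow-model confidences for one candidate example:
score = lira_score(0.97, confs_in=np.array([0.95, 0.99, 0.96]), confs_out=np.array([0.60, 0.72, 0.55]))
# Large positive scores suggest membership; sweeping a threshold over such scores yields the
# TPR-at-low-FPR curves referenced above (Figure 8).
```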
**Q4: does the proposed method work for federated learning on fine-grained classification such as CUB-200, Cars, and medical image datasets?**
**A4**: Please refer to the table in A3. |
U0P622bfUN | Federated Generative Learning with Foundation Models | fAydmWtaGN | official_comment | 1,700,234,743,826 | rBIpJVtNpZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to wnsW [1/2]
comment: We thank Reviewer wnsW for the valuable feedback and insightful comments. Here, we answer your questions and provide more experimental evidence.
**Q1: The datasets used by the experiments are all natural image datasets (ImageNette, ImageFruit, etc.), which can be well-represented in the pre-training dataset of foundation generative models. I would appreciate results on non-natural image datasets.**
**A1:** Although the pretraining dataset and the private dataset may exhibit some domain similarities (e.g., both may contain common real-world scenes), the tasks of ImageSquawk (fine-grained bird classification) and QuickDraw (a non-realistic domain in DomainNet) that we demonstrate in our experiments are inherently challenging. Training a model solely on synthetic data generated by foundation models to achieve high accuracy on the ImageNet or DomainNet test sets is a non-trivial task.
To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, including CUB-200 and Stanford Cars, as well as the satellite image dataset EuroSAT. As the official EuroSAT dataset does not provide predefined training and testing splits, we performed a split in an 8:2 ratio. Fine-grained recognition datasets are typically smaller than general image classification datasets, and a common practice in previous work is to start from a model pretrained on ImageNet. In this study, we therefore present two approaches: training the model from scratch and loading a pretrained ResNet34 model (see the sketch after the table below). As shown in the table, our method achieves excellent performance even in these challenging domains. Additionally, in the cross-silo federated learning scenario, when clients have strong computational capabilities, one can simply finetune the foundation models on these domains, achieving better performance than standard federated learning methods. We have added these results in the appendix.
| Prompt type | Training type | Dataset | FedAvg, $\beta=0.01$ | FedAvg, $\beta=0.5$ | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta=0.01$ | Ours (5-round), $\beta=0.5$ | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |
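For the "pretrain" rows in the table above, a standard way to start from an ImageNet-pretrained ResNet34 and swap its classification head is sketched below (assuming a recent torchvision; the class count is a placeholder):

```python
# Sketch: ImageNet-pretrained ResNet34 with a new head for a fine-grained task.
# The "scratch" rows would instead use weights=None.
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights

num_classes = 200  # placeholder, e.g. CUB-200
model = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)
# `model` is then trained on the synthetic data and/or updated locally as before.
```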
**Q2: the “class-level prompts” may not be needed at all since the server can just generate images by itself.**
**A2**: Yes, if the server has knowledge of the specific labels for the classification task, it can generate the images directly. However, class-level prompts are just the simplest case. We propose the instance-level approach to address more complex domains, where client-side customized prompt generation is more advantageous for improving the performance of the overall model.
**Q3: More broadly, the threat model of the paper may need to be defined more clearly.**
**A3**: Threat model: In traditional federated learning schemes that transmit model parameters/gradients, attackers can launch various attacks once they obtain the parameters/gradients, such as membership inference attacks, adversarial example attacks, and model inversion. In contrast, our approach significantly reduces potential security and privacy risks because users only transmit prompts in the first round of communication. To the best of our knowledge, there is no research indicating that prompts alone can be used to perfectly reconstruct private data. Therefore, our approach is more secure and privacy-preserving than FedAvg. |
U0P622bfUN | Federated Generative Learning with Foundation Models | ZKUIxB4P6Z | official_comment | 1,700,234,788,720 | rBIpJVtNpZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to wnsW [2/2]
comment: **Q4: What exactly is client privacy in this case? Can the client data be still considered “private” if you could already generate them with public foundation models.**
**A4**: Client privacy: In this paper, similar to differential privacy, we primarily focus on individual privacy, as it is more challenging for attackers. For instance, suppose Attack A perfectly targets a known subset of 0.1% of users in a client but succeeds with a random 50% chance on the rest, while Attack B succeeds with a 50.05% probability on any given user in a client. On average, these two attacks have the same success rate. However, the second attack is practically useless, while the first attack is much more powerful in the real world. This is precisely what LiRA [1] emphasizes: it evaluates a privacy attack by computing its true-positive rate at very low (e.g., ≤ 0.1%) false-positive rates (as illustrated in our experimental results in Figure 8), demonstrating that our method defends better against privacy attacks.
[1] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
**Q5: In many cases, the descriptions of the images can already be leaking privacy.**
**A5**: This could be the difference between individual privacy and group privacy. The majority of current papers on data protection focus on the individual 'user' or 'data subject', whose right to privacy will grow exponentially with the enforcement of the GDPR (General Data Protection Regulation). Group privacy, however, is not mentioned in the GDPR and is not well-defined. Also, even if the server learning part of a user's data distribution poses a potential privacy risk, our proposed method does not introduce additional risk in this regard: in traditional gradient/parameter-based methods, the server can still infer this information using model inversion [1,2]. For individual privacy, this information does not increase the leakage of membership in the private data.
[1] Geiping, Jonas, et al. "Inverting gradients-how easy is it to break privacy in federated learning?." Advances in Neural Information Processing Systems 33 (2020): 16937-16947.
[2] Hatamizadeh, Ali, et al. "Do gradient inversion attacks make federated learning unsafe?." IEEE Transactions on Medical Imaging (2023).
**Q6:Why exactly does the proposed method provide robustness to data heterogeneity? Heterogeneity can still surface in the (instance-level) client prompts and subsequently the generated images.**
**A6**: For one-shot Federated Learning (FL), regardless of the extreme data distributions among different clients, the server can always collect prompts corresponding to all the data, thus obtaining a balanced synthetic dataset on the server. Therefore, compared to FedAvg, our method is not sensitive to data heterogeneity in the first round of communication. In the subsequent model updates, the clients are still affected by non-IID data. However, due to the well-trained initial model obtained in the first round of communication, only a few rounds of communication are needed for local updates, making it more robust to data heterogeneity. As shown in Table 1 in the main text, our method exhibits significantly smaller gaps compared to FedAvg under different non-IID scenarios.
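For reference, the $\beta$ values quoted throughout typically come from a Dirichlet label partition of the kind sketched below (smaller $\beta$ gives more skewed per-client label distributions); this is a common construction and an assumption about the exact splitting code, not taken from the paper.

```python
# Sketch of a Dirichlet non-IID partition behind the beta values discussed above.
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(np.full(num_clients, beta))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 10 classes, 5 clients, highly skewed split (beta = 0.01)
parts = dirichlet_partition(np.repeat(np.arange(10), 100), num_clients=5, beta=0.01)
```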
**Q7: Minor comment: consider using different citation commands \citet , \cite**
**A7:** Thanks for pointing this out. We will check this in the updated version. |
U0P622bfUN | Federated Generative Learning with Foundation Models | 9l46wbVYfG | official_comment | 1,700,235,412,781 | VulmbS3YYc | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to Reviewer ZNb2 [1/2]
comment: Thank you for your valuable time in reviewing our paper. Below are responses to your concerns. Please let us know if you require any further information, or if anything is unclear.
**Q1: The training framework is predominantly tailored for image datasets, limiting its applicability.**
**A1**: Our approach is based on existing generative models that are widely used in various domains, such as Computer Vision with Stable Diffusion and Natural Language Processing with GPTs. This means that our framework can easily be applied to other domains, including NLP. However, due to time constraints, we were unable to conduct additional experiments on NLP tasks. We believe that further research in this area would be valuable and should be pursued in the future.
**Q2: The method heavily depends on the congruence between the captioning and generative models, making it challenging to ensure the proxy dataset's distribution aligns with the private data.**
**A2**: To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, namely CUB-200, Stanford Cars, and also the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. As for EuroSAT, the official dataset did not provide predefined training and testing splits, so we performed a split in an 8:2 ratio. The size of fine-grained recognition datasets is typically smaller compared to general image classification datasets. In previous work, a common practice is to utilize a pretrained model that has been trained on the ImageNet dataset. In this study, we present two approaches: training the model from scratch and loading a pretrained ResNet34 model.
As shown in the table, our method achieves excellent performance even in these challenging domains. This can be attributed to the fact that regardless of the magnitude of domain differences, pretraining a well-performing model on our synthetic data is beneficial for the downstream federated tasks.
| Prompt type | Training type | Dataset | FedAvg, $\beta=0.01$ | FedAvg, $\beta=0.5$ | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta=0.01$ | Ours (5-round), $\beta=0.5$ | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |
**Q3: The experimental setup, with only five clients, may not adequately represent real-world scenarios; expanding the evaluation to include 50 or 100 clients could provide more insightful results.**
**A3**: Thanks for your suggestion. We extended our analysis to include results on the ImageNette dataset with 50 and 100 clients. As shown in the table, our method continues to outperform FedAvg across all scenarios, and the improvements achieved by our method remain significant. See more details in the Appendix.
| # Client | FedAvg, $\beta$=0.5 | FedAvg, IID | Ours (one-shot) | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:--------:|:----------------:|:-----------:|-----------------|:------------------------:|:-------------------:|-------------|
| 5 | 75.0 | 79.2 | 85.2 | 94.0 | 95.6 | 92.2 |
| 50 | 72.1 | 77.0 | 85.2 | 93.8 | 91.2 | 92.2 |
| 100 | 70.1 | 67.2 | 85.2 | 92.8 | 93.2 | 92.2 | |
U0P622bfUN | Federated Generative Learning with Foundation Models | kIQR51kLqb | official_comment | 1,700,235,455,263 | VulmbS3YYc | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to Reviewer ZNb2 [2/2]
comment: **Q4: The comparison to a single baseline, FedAvg, falls short; including comparisons to advanced Federated Learning frameworks could better highlight the proposed method's effectiveness.**
**A4**: We have compared with two popular FL methods, MOON [1] and FedOpt [2]. We conducted experiments on the ImageNette and ImageNet100 datasets, considering a scenario with 50 clients under non-IID settings (beta=0.5). To the best of our knowledge, there is currently no federated learning method that surpasses centralized training, yet our proposed method even outperforms centrally trained models in many scenarios (see Table 1 in the main text). As shown in the table below, our method also outperforms these other federated learning approaches.
| Method | FedAvg | FedOpt | Moon | Ours (one-shot) | Ours (5-round) |
|:----------------------:|:------:|:------:|:-----:|:---------------:|:--------------:|
| ImageNette (beta=0.5) | 72.01 | 73.21 | 74.27 | 85.21 | 93.80 |
| ImageNet100 (beta=0.5) | 40.13 | 41.25 | 41.43 | 48.31 | 72.67 |
[1] Li, Qinbin, Bingsheng He, and Dawn Song. "Model-contrastive federated learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2] Reddi, Sashank J., et al. "Adaptive Federated Optimization." International Conference on Learning Representations. 2020.
**Q5: Table 2 shows the proposed method outperforming centralized learning significantly; a thorough explanation of this phenomenon is warranted.**
**A5**: This is because our method synthesizes a balanced dataset from all collected prompts during the first round of communication and pretrains a well-initialized model on it. Once we have this well-initialized model, several rounds of communication quickly bring the model to good performance. In the first table, we present results obtained by directly loading a model pretrained on ImageNet; doing so reduces the gap between our method and FedAvg, because the pretrained model provides a good starting point. However, pretraining a model on ImageNet incurs a significant computational cost on a dataset of 1.3M samples. In contrast, our method only requires training on a small amount of synthesized data to obtain a well-initialized model, hence achieving better performance than models trained in a centralized manner. |
U0P622bfUN | Federated Generative Learning with Foundation Models | mofNXJXZ39 | official_comment | 1,700,236,063,921 | 2ZurMVHvCB | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to Reviewer zu8q [1/2]
comment: Thank you for your constructive comments. We hope the following clarifications can address your concerns.
**Q1: The evaluation of the Federated Generative Learning (FGL) framework is limited to simpler domains like ImageNet and doesn't extend to other areas, casting doubt on whether prompts can encapsulate complexity.**
**A1**: To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, namely CUB-200, Stanford Cars, and also the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. As for EuroSAT, the official dataset did not provide predefined training and testing splits, so we performed a split in an 8:2 ratio. The size of fine-grained recognition datasets is typically smaller compared to general image classification datasets. In previous work, a common practice is to utilize a pretrained model that has been trained on the ImageNet dataset. In this study, we present two approaches: training the model from scratch and loading a pretrained ResNet34 model.
As shown in the table, our method achieves excellent performance even in these challenging domains. This can be attributed to the fact that regardless of the magnitude of domain differences, pretraining a well-performing model on our synthetic data is beneficial for the downstream federated tasks. We have added these results in the appendix.
| Prompt type | Training type | Dataset | FedAvg, $\beta=0.01$ | FedAvg, $\beta=0.5$ | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta=0.01$ | Ours (5-round), $\beta=0.5$ | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |
**Q2: While FGL aids in data generation for non-IID data, achieving congruence with a global distribution is yet to be addressed.**
**A2**: Thank you for your valuable feedback. We acknowledge that FGL is effective in generating data for non-IID scenarios; aligning the synthetic data with a global distribution (e.g., the IID setting) also works in our experiments (see Table 1 for the IID results).
**Q3: Security risks of prompts require more analysis. Could prompts be reverse-engineered to obtain private data?**
**A3**: During the communication phase, traditional Federated Learning (FL) methods typically transmit model parameters or gradients. However, these parameters can be vulnerable to adversarial attacks and model inversion attacks if intercepted by an adversary. To enhance security, some FL methods utilize prompts for communication. However, the potential for attackers to reconstruct private data using prompts has received limited research attention, both in black-box and white-box scenarios.
Recent work [1,2] has identified risks associated with the reconstruction of pretraining data from diffusion models. Nevertheless, there is currently no available method that can reconstruct previously unseen private data from diffusion models based solely on prompts. This presents an interesting and promising research direction for future investigation.
Consequently, considering the lack of research in this area, our method can be regarded as relatively safe and privacy-preserving.
[1] Shen, Xinyue, et al. "Prompt Stealing Attacks Against Text-to-Image Generation Models." arXiv preprint arXiv:2302.09923 (2023).
[2] Carlini, Nicolas, et al. "Extracting training data from diffusion models." 32nd USENIX Security Symposium (USENIX Security 23). 2023. |
U0P622bfUN | Federated Generative Learning with Foundation Models | DMdGpc6aeM | official_comment | 1,700,236,100,046 | 2ZurMVHvCB | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response to Reviewer zu8q [2/2]
comment: **Q4: The framework hasn't been benchmarked against other federated learning methods that employ generative models.**
**A4**: Unfortunately, we were unable to find any existing methods in the literature that directly address our specific setting, making it difficult to perform a fair comparison. In light of this, and following the suggestions of other reviewers, we conducted experiments using various types of generative models to demonstrate the applicability of our proposed method. To investigate the impact of different generative models on the results, we followed the setting in [1]. Our experiments primarily focus on three prevalent conditional diffusion models: DiT [3], GLIDE [2], and Stable Diffusion. We use these off-the-shelf models to generate synthetic images. Specifically, for GLIDE and Stable Diffusion, the prompt was configured as "a photo of {label name}, real-world images, high resolution." For DiT, the input comprised the label ID corresponding to the ImageNet1k dataset. The images synthesized by DiT and GLIDE are of size 256x256, whereas those produced by Stable Diffusion are of size 512x512. As shown in the following table, even when we vary the foundation models used in our method, FGL consistently outperforms FedAvg by a significant margin. This observation serves as evidence for the generality of our approach. We have added these results in the appendix; a minimal generation sketch is also given after the references below.
| Method | one-shot | 5-round, beta=0.01 | 5-round, beta=0.5 | IID |
|:-------------------------:|:--------:|:------------------:|:-----------------:|:--------:|
| Ours w/ Stable Diffusion | **85.2** | **82.8** | **94.1** | **95.6** |
| Ours w/ GLIDE | 79.0 | 76.2 | 89.4 | 89.4 |
| Ours w/ DiT | 76.2 | 74.6 | 90.2 | 92.8 |
| FedAvg (120-round) | - | 51.6 | 75.1 | 79.2 |
[1] Li, Zheng, et al. "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?." arXiv preprint arXiv:2305.12954 (2023).
[2] Nichol, Alex, et al. "Glide: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).
[3] Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. |
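As a concrete illustration of the class-level generation step described above, the snippet below is a minimal sketch using the Hugging Face `diffusers` implementation of Stable Diffusion. The checkpoint name, image count, and the absence of guidance/seed settings are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: class-level prompt synthesis with Stable Diffusion (diffusers API).
import torch
from diffusers import StableDiffusionPipeline

# Assumption: any public Stable Diffusion checkpoint; the exact version is not specified here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

label_names = ["golden retriever", "sports car"]  # hypothetical class names
images_per_class = 4  # illustrative budget

for label in label_names:
    # Prompt format quoted from the response above.
    prompt = f"a photo of {label}, real-world images, high resolution"
    for i in range(images_per_class):
        image = pipe(prompt).images[0]  # Stable Diffusion outputs 512x512 by default
        image.save(f"synthetic_{label.replace(' ', '_')}_{i}.png")
```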
U0P622bfUN | Federated Generative Learning with Foundation Models | dFtawOvszo | official_comment | 1,700,527,427,254 | rBIpJVtNpZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_wnsW"
] | title: Response to author rebuttal
comment: I appreciate the authors for providing a rebuttal.
- A1: I appreciate the authors for putting efforts into the new results. I also appreciate pointing to the results on QuickDraw. However, my concern is not fully addressed since the datasets “CUB-200” (natural images of birds) and “Cars” (natural images of cars) are very much still in-distribution for the pre-trained generative vision models.
- A2: The authors responded to my question by pointing to the use of instance-level prompts, but this didn’t quite address my concern that the significance of the class-level prompts is a bit overclaimed. Considering the default implementation of your experiments uses class-level prompts (page 6), I would suggest clearly spelling out the assumptions and weaknesses of class-level prompts in the updated paper.
- A4/A5:
- (For clarity, the following discussions apply to “instance-level” prompts)
- By explaining the LiRA paper on membership inference in A4, the authors imply that the paper cares about instance-level privacy — i.e. image-level privacy, where an attacker cannot confidently tell whether one image is or isn’t used for training.
- I’m definitely okay with the **privacy granularity** in this case; what I’m uncertain about (with Q5) is whether **all the information contain within a single example (i.e. image-label pair)** is protected.
- A5 does not quite address my question. I do not agree that this is the difference between “group privacy” vs “individual privacy”; rather it is that the instance-level prompts have provided **side channels into learning about the information of a single image.**
- Consider running local, image-level DP-SGD on a client when participating in a vanilla FedAvg task. All the information corresponding to a single example (pixel values and labels) are protected behind the “privacy barrier” since privatized gradients are applied to the model. In contrast, instance-level prompts would leak information about the pixel values, and thus do not really satisfy instance-level privacy in the sense of “attacker not being able to tell whether an image is used for training”. I do acknowledge however that there is value in providing empirical privacy of the pixel values.
- A6: Thanks for the clarification that the server can select/curate prompts to essentially manually mitigate the data heterogeneity. I would suggest highlighting this in the updated version.
Overall, the technique proposed in the paper is interesting, though I feel the assumptions on client data distributions and privacy claims are too strong. Having read through other reviewers’ comments, I’m keeping my rating at 5. |
U0P622bfUN | Federated Generative Learning with Foundation Models | A2zDYe7XQ6 | official_comment | 1,700,539,395,845 | dFtawOvszo | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Response
comment: Thank you for your valuable time.
A1: So `you just ignore our results on QuickDraw and EuroSAT`, which also perform much better than traditional FL methods and refute your claim.
A2: Thanks for your suggestion. The default implementation of our experiments utilizes class-level prompts, as they have been shown to provide sufficiently good performance while effectively protecting privacy. The choice of training method depends on the specific use case. It is important to note that there is no perfect approach to data privacy, as no method can guarantee zero information leakage while achieving a perfect model (`no free lunch in privacy`).
A4/A5:
- `Can you find any method in FL that outperforms our method and efficiently defends against the LiRA attack` (the most powerful membership inference attack)? To the best of our knowledge, no other method has been shown to achieve such performance.
- I understand your concern about potential risks associated with prompts. Consider the following: `Can you reconstruct any private data using only these prompts`? This task is extraordinarily challenging: even in the complete white-box setting, the prompt cannot perfectly reconstruct the private data, not to mention that in our scenario the generative model has never been trained on the private data.
A6: Thank you for acknowledging the robustness of our method in handling non-IID data.
We aim to motivate researchers to consider the effective integration of foundation models for downstream tasks in federated learning through our approach. Additionally, we encourage researchers to explore and identify viable attack strategies to demonstrate the method's potential lack of privacy preservation, either **theoretically or experimentally**. |
U0P622bfUN | Federated Generative Learning with Foundation Models | 6JeFYeADhe | official_comment | 1,700,670,736,076 | 2ZurMVHvCB | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: With the hope that our response addresses your concerns
comment: Dear Reviewer zu8q,
As the discussion period is closing, we sincerely look forward to your feedback. The authors deeply appreciate your valuable time and efforts spent reviewing this paper and helping us improve it.
Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback!
Best regards,
Authors |
U0P622bfUN | Federated Generative Learning with Foundation Models | KddjiVwGeM | meta_review | 1,701,786,518,795 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Area_Chair_8sJH"
] | metareview: This paper presented an interesting approach to federated learning that doesn't require each client to send the parameters or the gradients to the server but only sends a text prompt describing the client data. The server can then synthesize training data using the received prompts and train.
While the reviewers appreciated the idea, a number of concerns were raised. Some of these were related to the fact that the datasets used in the experiments could have been present in the pre-training dataset of the foundation model, and to the applicability of the method to datasets more sophisticated than ImageNet.
The authors provided a detailed rebuttal and the paper was discussed. However, apart from one reviewer who marginally leaned towards acceptance (though still had some concerns), the other three reviewers maintained their original assessment and their concerns remained.
In the end, after taking into account the reviews, the discussion, and my own reading of the paper, the paper falls short of the acceptance threshold. Although the authors did respond to the reviewers' concerns with additional experimental results, the paper in its current form still does not seem ready for publication. It is advised that the authors properly incorporate the concerns raised in the reviews and submit the work at another venue.
justification_for_why_not_higher_score: The concerns of the reviewers still persisted after the author response and discussion
justification_for_why_not_lower_score: N/A |
U0P622bfUN | Federated Generative Learning with Foundation Models | GgDNjyva5t | decision | 1,705,406,011,852 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Reject |
U0P622bfUN | Federated Generative Learning with Foundation Models | eRhNDltiUV | official_comment | 1,700,670,517,901 | GpT9FR36vo | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Thanks for Response
comment: Thank you for your response!
1. Since our method focuses on using foundation models in FL, it should be relatively easy to adapt to NLP tasks as well, where foundation models are also used for data synthesis [1,2].
[1] Yue, Xiang, et al. "Synthetic text generation with differential privacy: A simple and practical recipe." arXiv preprint arXiv:2210.14348 (2022).
[2] Veselovsky, Veniamin, et al. "Generating Faithful Synthetic Data with Large Language Models: A Case Study in Computational Social Science." arXiv preprint arXiv:2305.15041 (2023).
2. Why should it not be possible for our method to perform better than centralized training on certain datasets? Consider this: when a model is pretrained on ImageNet and then used for certain downstream tasks, it often performs better than a model trained from scratch, for example on the Cars and CUB-200 datasets. Our method provides a well-initialized model in the first round, hence achieving better performance than models trained from scratch in a centralized manner.
Please feel free to let us know if there are any further questions. |
U0P622bfUN | Federated Generative Learning with Foundation Models | GpT9FR36vo | official_comment | 1,700,668,036,879 | 2VGRuF5mKB | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Reviewer_ZNb2"
] | title: Experimentation with other datasets and the experimental results
comment: I appreciate the authors' rebuttal. However, I have two major concerns.
1. The authors mention that limited time prevented them from conducting NLP experiments, yet they argue that it is easy to apply the approach to NLP tasks.
2. It is hard to understand how the FL results could be better than centralized approaches. I wonder if the authors could explain the in-depth reason. |
U0P622bfUN | Federated Generative Learning with Foundation Models | YlkSlYahCA | official_comment | 1,700,472,328,177 | 2ZurMVHvCB | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: A friendly reminder that the discussion stage will be closed in 2 days
comment: Dear Reviewer,
Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have.
Thank you in advance!
Kind regards,
Authors |
U0P622bfUN | Federated Generative Learning with Foundation Models | 2VGRuF5mKB | official_comment | 1,700,472,292,614 | VulmbS3YYc | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: A friendly reminder that the discussion stage will be closed in 2 days
comment: Dear Reviewer,
Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have.
Thank you in advance!
Kind regards,
Authors |
U0P622bfUN | Federated Generative Learning with Foundation Models | RiZgxxrdO0 | official_comment | 1,700,472,247,801 | rBIpJVtNpZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: A friendly reminder that the discussion stage will be closed in 2 days
comment: Dear Reviewer,
Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have.
Thank you in advance!
Kind regards,
Authors |
U0P622bfUN | Federated Generative Learning with Foundation Models | ca7XdIlXCg | official_comment | 1,700,472,178,098 | uP676dsarr | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: A friendly reminder that the discussion stage will be closed in 2 days
comment: Dear Reviewer,
Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have.
Thank you in advance!
Kind regards,
Authors |
U0P622bfUN | Federated Generative Learning with Foundation Models | ticwvWFRNm | official_comment | 1,700,236,375,549 | U0P622bfUN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission1/Authors"
] | title: Summary
comment: Dear Reviewers and ACs,
Thank you all for your insightful reviews and constructive comments on our manuscript. We greatly appreciate your feedback, which has helped us improve our work. We have carefully considered all the suggestions and made the following changes:
1. We have included three additional challenging datasets from different domains: CUB-200 and Stanford Cars, which are fine-grained image classification datasets, and the EuroSAT satellite image dataset, whose images are known to be more difficult to generate. We have conducted experiments on these datasets to demonstrate the effectiveness of our method in various scenarios.
2. To showcase the versatility of our proposed approach, we have employed diverse generative models. By doing so, we aim to demonstrate that our method is not limited to a specific model but can be applied to different models with similar success.
3. In order to provide more robust evidence of the effectiveness and scalability of our proposed method, we have conducted experiments with an increased number of clients. Specifically, we have included experiments with 50 and 100 clients, which further support our findings and demonstrate the scalability of our approach.
4. We have included extensive discussions on the security and privacy aspects of our proposed method. We believe that addressing these concerns is crucial, and we have provided thorough explanations and considerations to ensure the privacy and security of the data used in our experiments.
Thank you once again for your valuable feedback.
Best Regards,
All authors. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | LMXXBnRdjZ | official_review | 1,698,413,312,588 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_GFiJ"
summary: This paper proposes an approach for constructing positive and negative samples based on the quality of a MILP problem's feasible solutions. With the constructed samples, one can train a GNN model to predict good assignments for integer variables using a contrastive learning mechanism, which helps the search find optimal solutions more quickly. Superior experimental results demonstrate the effectiveness and generalizability of the proposed approach.
soundness: 2 fair
presentation: 3 good
contribution: 3 good
strengths: The research topic is valuable and the paper is well written. Moreover, the designed method and its motivation are presented succinctly and clearly. The performance of the trained GNN is also impressive, which indicates the superiority of the proposed method.
weaknesses: There are still some issues needed to be addressed to make this paper meet the requirement of ICLR:
1. The contribution and novelty are not summarized clearly and are relatively weak. The main contribution of this paper is applying contrastive learning to predict and search for optimal solutions.
2. The results of the empirical evaluation could be more solid and convincing. The experiments are only conducted on two generated datasets and one competition dataset, without the recognized authoritative MIPLIB2017 benchmark. Furthermore, only an open-source MILP solver, which is not well configured, is included in the baselines. Considering that different configurations can significantly affect a solver's performance, I would expect further comparative experiments conducted with SCIP configured with tuned parameters or with more powerful commercial solvers (like GUROBI and CPLEX).
questions: I noticed that the effect of the hyperparameters k0 and k1 is evaluated. Of course, these hyperparameters are important, because they control the tradeoff between the feasibility and quality of predicted solutions. However, considering that MILP instances generally have different numbers of integer variables, a specific number of integer variables may not be a good choice. I was wondering whether it would be better to use the coverage rate (i.e., the ratio of fixed variables to the entire set of integer variables when using prediction methods like Neural Diving) to control the number of fixed integer variables.
In addition, some studies indicate that each instance has a unique optimal coverage rate (https://arxiv.org/abs/2308.00327), so I think that evaluating the effect of k0 by just computing an average number on one dataset (CA) may not help readers configure their own prediction model properly.
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | Uipxj4Qg21 | official_review | 1,697,774,654,807 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_KTmj"
] | summary: In this paper, the authors propose to integrate contrastive learning with the pipeline of solving mixed integer linear programming. They manage to generate positive samples and negative samples during the training. The positive samples are optimal or near-optimal solutions of MILP, while the negative samples are infeasible or low-quality solutions. The model is then trained by these samples via supervised contrastive learning to predict better solutions. After this, the predicted solutions are improved by the PaS framework to make them become valid optimal solutions. Experiments on multiple datasets show the performance of the proposed ConPas framework.
soundness: 3 good
presentation: 3 good
contribution: 2 fair
strengths: 1. The paper is well-written and easy to follow.
2. The idea of utilizing contrastive learning in MILP looks interesting to me.
3. The experiments contain various MILP datasets.
weaknesses: 1. I find one work in the related work [1] very similar to this paper. Both papers propose to utilize contrastive learning in solving MILP and share the core idea of generating positive and negative samples. The only difference is the operation after the contrastive learning part: the ICML paper [1] uses large neighborhood search (LNS) and this ICLR paper uses Predict and Search (PaS). Actually, I think this paper is covered by the ICML paper, as PaS could be regarded as a variant of LNS. Though the authors do mention this ICML paper in the related work, they do not discuss the difference between their work and the ICML paper, nor compare against it as a baseline.
2. Though the idea of utilizing contrastive learning in MILP looks interesting, I consider the current usage of contrastive learning to be more like an incremental part. In this work, solving MILP basically relies on the performance of PaS. I am not sure if this contribution is good enough for ICLR. To me, this work is more like using contrastive learning to find a better initialization for PaS, of which the application is limited.
3. The results of experiments look good, but I think more datasets with hard cases are required. In my own experience of using SCIP, I think MVS and MIS are relatively easy for SCIP. In contrast, the datasets from NeurIPS 2021 ML4CO are difficult for SCIP, but it looks like the authors did not select the whole datasets of ML4CO, as they said: "IP instances are taken from the NeurIPS 2021 ML4CO competition Gasse et al. (2022)." I wonder how the data is selected. In fact, there are 3 benchmarks in NeurIPS 2021 ML4CO[2], I wonder why the authors neglect them. Besides, a common dataset MIPLIB is also missing in the paper.
[1] Huang, T., Ferber, A.M., Tian, Y., Dilkina, B. & Steiner, B.. (2023). Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning. <i>Proceedings of the 40th International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i> 202:13869-13890 Available from https://proceedings.mlr.press/v202/huang23g.html.
[2] https://www.ecole.ai/2021/ml4co-competition/
questions: 1. Please discuss the relationship between your paper and the ICML paper I mentioned in the weaknesses. In my view, these two papers are very similar and the ICML paper seems to cover your work to some extent. A comparison in the experiments is also suggested if possible.
2. As I mentioned before, this work is more like using contrastive learning to find a better initialization for PaS. I wonder whether this work can be applied to methods other than PaS, e.g., Neural Diving mentioned in the paper.
3. The datasets in the experiments require further improvement.
flag_for_ethics_review: ['No ethics review needed.']
rating: 3: reject, not good enough
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | Ui2FqW5xoH | official_comment | 1,700,367,461,894 | YDnAiTaavU | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We thank the reviewer for the feedback and suggestions.
Regarding the weaknesses in the novelties and comparison to SCIP and Gurobi (weaknesses 1,2 and 3), please kindly refer to the general responses to all reviewers.
To address the other weaknesses:
2. ConPaS is a solution construction heuristic and we acknowledge that our approach doesn't guarantee optimality or feasibility. This drawback also applies to Neural Diving (ND) [Nair et al, 2020] and PaS [Han et al, 2023]. However, in a distributional setting where one needs to solve similar MIP instances over and over again, approaches like ND and ConPaS can be particularly helpful if they are able to predict solutions that are empirically feasible and of high quality. This is indeed true according to our experiments: on the five MIP benchmarks (including one in the Appendix), we achieve a 100% feasibility rate using a consistent set of hyperparameters on each benchmark, confirming the applicability of these approaches. However, we also acknowledge that ConPaS (or ND and PaS) is not universally applicable to all MIP solving, especially on more constrained problems. For example, using MIP for scientific discovery when solutions are sparse can be extra challenging [Deza et al., 2023], and other approaches tailored to such problems are often needed. We have added this discussion to the conclusion section.
3. Thank you for this comment. We use Gurobi to collect data since Gurobi typically runs a lot faster than SCIP. For data collection, we set the time limit to an hour for Gurobi. We could easily replace Gurobi with SCIP for data collection and get the same-quality training data but this comes at the cost of 4-8 times (4-8 hours per instance) longer runtime on average. Due to our limited computational resources, using Gurobi for data collection is more practical for us.
We have included results on Gurobi in Appendix Section D.2 in the updated draft. We show that ConPaS still outperforms Gurobi and PaS significantly in terms of both primal integral and primal gap.
4. The main motivation for designing negative samples this way is that we want them to be close to the positive samples in the input space but with very different quality (i.e., near misses). From a theoretical point of view, the InfoNCE loss we use has the property that it automatically focuses on hard negative pairs (i.e., samples with similar representations but very different qualities) and learns representations that separate them (see, e.g., [Tian 2022]); a generic form of this loss is sketched after the references below. While our approach is built upon a theoretical understanding of contrastive learning, we acknowledge that our work designs the negative samples heuristically and does not aim for theoretical impact. On the other hand, we believe that our work contributes a new principled method that demonstrates strong empirical performance in challenging domains.
5. Regarding the accuracy of the predicted solutions, we would like to point out that the prediction accuracy doesn’t strongly correlate with the performance of the downstream task where the predictions are used (in this paper, the downstream task is the search phase). The ML model is trained on multiple solution samples and when deployed in the search, we use only a part of the predictions controlled by the hyperparameters. Therefore, there is no standard way to quantify the accuracy of the ML predictions in this setting that captures the downstream performance.
[Nair et al., 2020] Solving mixed integer programs using neural networks, Arxiv 2020.
[Han et al., 2023] A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR 2023
[Deza et al., 2023] Fast Matrix Multiplication Without Tears: A Constraint Programming Approach. CP 2023
[Tian 2022] Understanding Deep Contrastive Learning via Coordinate-wise Optimization. NeurIPS 2022. |
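For reference, point 4 above appeals to a property of the InfoNCE objective; a generic form of that loss is sketched below. This is the standard formulation from the contrastive-learning literature, not a transcription of the paper's exact (quality-weighted) variant; the anchor/positive/negative notation is an assumption for illustration.

```latex
% Generic InfoNCE-style contrastive loss for one MILP instance.
% z: anchor representation, P: positive (high-quality) samples,
% N: negative (low-quality or infeasible) samples, tau: temperature.
\mathcal{L}_{\text{InfoNCE}}
  = -\frac{1}{|P|} \sum_{p \in P}
    \log \frac{\exp\left(\operatorname{sim}(z, z_p)/\tau\right)}
              {\exp\left(\operatorname{sim}(z, z_p)/\tau\right)
               + \sum_{n \in N} \exp\left(\operatorname{sim}(z, z_n)/\tau\right)}
```

Because a negative sample whose representation is close to the anchor contributes a large term to the denominator, the gradient concentrates on such "near-miss" pairs, which is the hard-negative behavior referenced in the response.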
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | KR3sPVMqql | official_comment | 1,700,367,348,197 | OZRNv4Pv8d | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | title: General Response 2/2
comment: **Choice of MIP Problem Benchmark**
We would like to respectfully argue that the benchmark problems used in our paper are already challenging enough for existing MIP solvers such as SCIP and Gurobi, as shown by the results reported in Sections 5.2 and D.2. These benchmarks have indeed been used in various previous studies [Han et al., 2023; Huang et al., 2023; Wu et al., 2021]. We use even larger scale instances for combinatorial auction and independent set problems compared to the closely related recent work [Han et al., 2023].
We would like to clarify that we also use two problem domains (item placement and workload apportionment) from the NeurIPS 2021 ML4CO competition. The results for the workload apportionment problem are reported in the Appendix because it is not challenging enough for our setting. We use the same train/validation/test split as suggested by the organizers (we use only 400 instances from their training set, although 9,900 instances are given). We would also like to respectfully disagree with the claim that instances from the ML4CO competition are harder than the other benchmarks. In the competition, they are indeed hard, since the competition rules require all heuristics in SCIP (including restarts and primal heuristics) to be turned off. However, in our paper, we allow all those options and fine-tune them for our SCIP baseline to maximize its performance. We also found that the workload apportionment problem is indeed too easy for approaches like PaS and ConPaS.
We agree with reviewers KTmj and GFiJ that MIPLIB is indeed an important MIP benchmark. However, there are few successful cases of ML-based methods for MIP solving that learn heuristics which generalize to heterogeneous collections of real-world instances like MIPLIB, which are diverse in size, domain, and structure. Following the majority of previous work, our paper focuses on distributional settings for MIP solving, which are also important in real-world applications. Nevertheless, we believe it is important for future work to develop methods that generalize to diverse MIP instances.
[Nair et al., 2020] Solving mixed integer programs using neural networks, Arxiv 2020.
[Han et al., 2023] A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR 2023
[Huang et al., 2023] Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning. ICML 2023
[Wu et al., 2021] Learning Large Neighborhood Search Policy for Integer Programming. NeurIPS 2021. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | 50D8TvJWne | official_comment | 1,700,367,506,949 | LMXXBnRdjZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We thank the reviewer for the feedback and suggestions.
Regarding the weaknesses, please refer to the discussions on the novelties of the work and choices of benchmark in the general response to all reviewers.
Below are our responses to answer the question regarding hyperparameters and coverage rates:
We agree that using coverage rates as alternatives to model k0 and k1 would be more helpful when the instances are diverse in size. In our paper, we described a systematic way in Section 5.1 “Hyperparameters” to tune both k0 and k1 based on a percentage of the number of variables (10%-50%). We believe that this hyperparameter tuning method is easy to follow. We report the results of different k0 for CA to demonstrate how tuning could be done.
Regarding the optimal coverage rate studied in [Yoon et al., 2023], it is important for methods like Neural Diving (ND) since ND requires training a separate model for each coverage rate; identifying an optimal coverage rate helps overcome this training inefficiency. In ConPaS, however, instead of fixing all variables according to the prediction, we let the MIP solver explore regions around the prediction, which allows more flexibility and room for prediction inaccuracy and therefore removes the need for an accurate coverage threshold. (A small sketch of the correspondence between coverage rates and the k0/k1 counts is given after the reference below.)
[Yoon et al., 2023] Threshold-aware Learning to Generate Feasible Solutions for Mixed Integer Programs. Arxiv 2023. |
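To make the correspondence between coverage rates and the absolute counts k0/k1 concrete, a small helper is sketched below. It is an illustration only: the function name and the separate rate0/rate1 arguments are assumptions, and the 10%-50% range simply mirrors the tuning grid mentioned above.

```python
# Hypothetical helper: derive k0/k1 from coverage rates instead of absolute counts.
def counts_from_coverage(n_binary_vars: int, rate0: float, rate1: float) -> tuple[int, int]:
    """Return (k0, k1): how many binary variables to tentatively assign 0 and 1.

    rate0 / rate1 are fractions of the binary variables (e.g., values in the
    0.1-0.5 range, mirroring the 10%-50% tuning grid described above).
    """
    k0 = int(rate0 * n_binary_vars)
    k1 = int(rate1 * n_binary_vars)
    assert k0 + k1 <= n_binary_vars, "cannot cover more variables than exist"
    return k0, k1

# Example: an instance with 4,000 binary variables, 30% fixed towards 0 and 10% towards 1.
print(counts_from_coverage(4000, 0.30, 0.10))  # -> (1200, 400)
```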
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | 0kNBOJMhsI | official_comment | 1,700,367,634,501 | Uipxj4Qg21 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses and questions concerning the novelties and choices of MIP benchmark, please kindly refer to the general responses.
In addition, we would like to further discuss how this work could be applied beyond PaS to answer your 2nd question: ConPaS is more versatile since the prediction coming out of its ML model can be useful in different ways. One example is to warm-start LNS, as mentioned earlier. Moreover, one could leverage the ML prediction from ConPaS to assign variable branching priorities and/or generate cuts in tree searches such as branch-and-bound (or branch-and-cut). We defer the deployment of ConPaS in different algorithms to future work.
We also want to clarify that Neural Diving is a more restricted variant of PaS (and hence of ConPaS): it corresponds to setting \Delta = 0 in PaS, which allows no change to the assigned values once they are fixed in the search. A schematic of this \Delta-neighborhood constraint is sketched below. |
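To make the \Delta mechanism concrete, the sketch below shows how a PaS-style trust-region constraint around a predicted partial assignment could be added with gurobipy. This is an illustrative reconstruction based on the description above, not the paper's code: the function and variable names are assumptions, and setting delta = 0 recovers the Neural Diving-style hard fixing discussed in this response.

```python
# Illustrative PaS-style neighborhood constraint around an ML-predicted partial assignment.
import gurobipy as gp

def add_pas_neighborhood(model: gp.Model, fixed_to_zero, fixed_to_one, delta: int) -> None:
    """Restrict the search to solutions that flip at most `delta` of the proposed assignments.

    fixed_to_zero / fixed_to_one: lists of binary gurobipy variables that the ML model
    proposes to set to 0 / 1 (the k0 and k1 variables with the most confident predictions).
    """
    deviation = gp.quicksum(x for x in fixed_to_zero) + gp.quicksum(1 - x for x in fixed_to_one)
    model.addConstr(deviation <= delta, name="pas_trust_region")
```

Solving the resulting restricted MIP with a time limit then yields the constructed solution.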
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | 8Yg7Up2BrI | official_comment | 1,700,367,555,183 | PMdcjp79U4 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses concerning the novelties, choices of MIP benchmark and using SCIP as a baseline (weaknesses 1,2 and 3), please kindly refer to the general responses to all reviewers posted at the top.
Regarding weaknesses 4 and 5:
(4) We conduct an additional ablation study on ConPaS-LQ on the MVC and CA problems. (Due to limited computation resources, we are still in the process of getting results for ConPaS-inf and other problems.)
The initial results are shown in the table below, where ConPaS-LQ (unweighted) refers to training using the original InfoNCE function without considering different qualities of the samples and ConPaS-LQ (weighted) refers to training using the modified loss. When we use the original loss function, ConPaS is still able to outperform PaS. Its performance further improves when the modified loss function is used.
| Method | MVC Primal Gap | MVC Primal Integral | CA Primal Gap | CA Primal Integral |
|------------------------|------------|-----------------|------------|-----------------|
| PaS | 0.17% | 13.9 | 1.16% | 28.9 |
| ConPaS-LQ (unweighted) | 0.12% | 3.3 | 0.57% | 24.3 |
| ConPaS-LQ (weighted) | 0.10% | 2.8 | 0.16% | 19.7 |
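For concreteness, one form the "weighted" variant in the table above could take (a sketch under our own assumptions; the paper's exact loss may differ) is an InfoNCE-style loss in which each positive sample $x^+_j$ carries a weight $w_j$ derived from its objective value, with the unweighted variant recovering uniform weights $w_j = 1/|\text{positives}|$:

```latex
\mathcal{L}
= - \sum_{j} w_j \,
\log \frac{\exp\big(\mathrm{sim}(p, x^+_j)/\tau\big)}
          {\exp\big(\mathrm{sim}(p, x^+_j)/\tau\big)
           + \sum_{k} \exp\big(\mathrm{sim}(p, x^-_k)/\tau\big)},
\qquad \sum_j w_j = 1,
```

where $p$ is the model's prediction, $\mathrm{sim}(\cdot,\cdot)$ a similarity score, and $\tau$ a temperature.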
(5) We thank the reviewer for the suggestion to describe solvers like Gurobi and CPLEX more accurately. We have updated the text accordingly in the new draft.
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | OZRNv4Pv8d | official_comment | 1,700,367,307,631 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | title: General Response
comment: We are grateful to all reviewers for their time and helpful suggestions. In this general response, we address concerns and answer questions that come from multiple or all reviews. We summarize our responses and additional findings in the rebuttal text, and we encourage reviewers to look at the updated paper draft uploaded to OpenReview. The updated draft contains new experimental results comparing against Gurobi and a few edits to improve the clarity of the paper. The changes are highlighted in blue for visibility.
**Novelties of ConPaS and its Differences from CL-LNS [Huang et al, ICML 2023]**
**Differences**: We would like to clarify that our work ConPaS and the existing work CL-LNS, published at ICML 2023 this year, are complementary to each other. More specifically,
- ConPaS learns to construct a high-quality (partial) solution from scratch and then searches to find it;
- CL-LNS learns to predict which part of a given solution is not good enough and then improves it.
One could apply ConPaS to warm-start CL-LNS (or any other Large Neighborhood Search (LNS) method). This is similar to the relationship between Neural Diving (a solution construction method) and ML-guided LNS (a solution improvement method) demonstrated in [Nair et al., 2020].
Furthermore, while CL-LNS applies only to Large Neighborhood Search, the prediction from ConPaS's ML model can be useful in different search algorithms for MIP. One example is warm-starting LNS, as mentioned above. In addition, one could leverage the ML prediction from ConPaS to assign variable branching priorities and/or generate cuts to improve the performance of tree searches such as branch-and-bound (or branch-and-cut). We defer the deployment of ConPaS in those algorithms to future work.
**Novelties**: While both ConPaS and CL-LNS use contrastive learning for MIP solving, we would like to point out our main novelties: (i) we design a novel data collection process that considers two types of negative samples. Finding negative samples is not straightforward, especially when using low-quality solutions as negatives; in that case, we leverage local branching techniques (more often used to find improved solutions) to find bad solutions that are similar to good ones, and we formulate this as a nontrivial bilevel optimization problem. (ii) We design a novel contrastive loss function that takes into account positive samples of different solution qualities. (iii) We demonstrate strong empirical performance of ConPaS measured by various metrics, and we believe our work contributes a new and valuable empirical method.
**Comparisons with SCIP**
Regarding comparisons with SCIP, we mentioned in the paper that we indeed fine-tuned SCIP's heuristic settings in our experiments. Specifically, we set SCIP's heuristic mode to AGGRESSIVE to focus on primal bound improvement, and we also allow presolving and restart heuristics in SCIP. We have made these details clear and highlighted them in the revised draft.
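For readers who want to reproduce a comparable setup, below is a minimal sketch with PySCIPOpt; the instance file name and time limit are placeholders, and the authors' exact configuration is an assumption here rather than taken from the paper.

```python
from pyscipopt import Model, SCIP_PARAMSETTING

m = Model()
m.readProblem("instance.mps")                   # hypothetical instance file
m.setHeuristics(SCIP_PARAMSETTING.AGGRESSIVE)   # emphasize primal heuristics
m.setPresolve(SCIP_PARAMSETTING.DEFAULT)        # keep presolving (and restarts) enabled
m.setParam("limits/time", 3600)                 # placeholder time limit in seconds
m.optimize()
if m.getNSols() > 0:                            # print incumbent objective if one exists
    print("best objective:", m.getObjVal())
```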
It is a common practice in the MIP-solving community to present SCIP results for completeness. We want to be clear that we do not intend to make big statements about outperforming SCIP, since the main competitors of ConPaS are the other ML-based approaches - ND [Nair et al., 2021] and PaS [Han et al., 2023].
**Comparisons with Gurobi**
We would like to point out that ConPaS is agnostic to the underlying MIP solver used in the Predict-and-Search phase. It could be applied with SCIP, Gurobi or CPLEX. In our paper, we demonstrate the effectiveness of ConPaS using SCIP as the solver, but it could also be built upon Gurobi. We have included results on Gurobi in Appendix Section D.2 of the updated draft. Due to limited computational resources, we ran experiments with Gurobi, PaS [Han et al., 2023] and ConPaS on MVC, MIS and CA instances. **The results show that ConPaS outperforms Gurobi significantly in terms of both the primal gap and primal integral performances.**
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | 2W4cMY9YtS | official_comment | 1,700,552,524,218 | Uipxj4Qg21 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_KTmj"
comment: Thank you very much for the detailed response and the improved quality of the paper. However, I think the main concern with this paper is still the difference between ConPaS and CL-LNS. I understand that they are meant to be complementary to each other, but I still believe that the approaches of these two works are similar, or at least strongly correlated, as mentioned by other reviewers. Therefore, I think the authors should include the discussion of ConPaS versus CL-LNS in the **main paper**, instead of just mentioning it as related work; otherwise, it will look as if the comparison is being deliberately avoided. Since the authors use a lot of space in the general response to describe the difference, you cannot assume that readers of the paper will understand it just from a mention and a citation. Given the similarity of ConPaS and CL-LNS, it would not be an exaggeration to open a separate subsection with a discussion of the differences or a comparison table. Only in this way can readers fully understand the novelty of this work.
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | yfta5giyPF | official_comment | 1,700,605,621,249 | 2W4cMY9YtS | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We would like to thank the reviewer for reading our rebuttal and your valuable suggestion on addressing the differences between ConPaS and CL-LNS in the main paper. We agree that it is important for the readers to understand the differences. We have added a paragraph at the end of the related work section highlighted in blue to address this issue. Please kindly let us know if any concerns remain. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | 5GrMsgr4HJ | official_comment | 1,700,609,479,183 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: Dear reviewers,
Thank you again for taking the time to review our paper. We would be grateful if you could kindly check whether our response has answered your questions, and let us know if any issues remain. In addition to our rebuttal response, we have worked to respond to the points raised by the reviewers and submitted a revision.
Here is a summary of our effort to improve the paper draft:
1. We added experimental results comparing the performance with Gurobi where ConPaS still shows significant improvement over the baselines.
2. We added a discussion to the end of Section 3 to address the concerns about the novelties and discuss the differences between CL-LNS and ConPaS.
3. We improved the writing of the paper by enhancing clarity and by adding and highlighting some important details.
If you find the responses and revisions align well with the paper's objectives and address your initial concerns, we are hopeful that an adjustment in the score could reflect these improvements. Please feel free to ask if you have more questions or if there's anything else we can provide to support your evaluation. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | qBOwsxbPuW | official_comment | 1,700,664,835,850 | 8Yg7Up2BrI | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_nvLV"
] | comment: Dear authors,
Thank you for these results. These results are very good to know.
Let me emphasize that the statement in bold at the end of your general response does not interest me, and it shouldn't interest the other reviewers either.
I now understand the novelty of the paper to be the way you compute the negative examples for the contrastive learning and that there are key differences to CL-LNS. This puts the paper in somewhat of a different light. I don't usually raise scores by this much, but actually the bilevel model for computing negative examples is rather clever and really works well. I encourage the other reviewers to take this into account. I will adjust my review. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | PMdcjp79U4 | official_review | 1,698,406,646,855 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_nvLV"
summary: The paper presents a method for finding primal solutions to mixed-integer programs using a graph neural network-based approach. The training and performance of the approach are improved through the use of contrastive learning, which has been gaining popularity in a variety of deep reinforcement learning applications because it does not require expensive labeling of data to "pre-train" networks. The approach builds on the "predict and search" method from a previous ICLR paper and is evaluated experimentally on several relatively easy MIP problems and a dataset of integer programs from the NeurIPS 2021 ML4CO competition.
soundness: 3 good
presentation: 3 good
contribution: 1 poor
strengths: - Contrastive learning shows great promise in the space of combinatorial optimization; we see again and again that it is an effective mechanism for reducing training time and creating great models.
- The empirical performance of the method on the datasets tested is quite strong.
- (Updated) The novelty of the paper, while not huge, is sufficient for ICLR. The authors have indicated how it differs from CL-LNS, and the bi-level model is an interesting contribution that other groups solving MIPs will want to consider.
weaknesses: - The instance dataset is not so great, but I admit there are not many good public MIP problems out there. Simply put, to a MIP person, claiming that you can solve the CA dataset is just not that interesting. Since all the other MIP papers at ICLR/NeurIPS seem to have the same problem, I'll let it pass.
- Using SCIP as a direct point of comparison is not really fair. SCIP is trying to prove optimality, while the method proposed in this work is just a primal heuristic. I appreciate, however, that the authors do not make big claims about beating SCIP the way some papers in this area do. They do seem to understand that beating SCIP is relatively meaningless.
- I am a little surprised not to see an ablation study on the modified loss function. (Update: the authors have provided one, and the modified loss works and is not the only reason the method outperforms previous work.)
- The introduction's description of Gurobi and CPLEX is not complete. They are really branch and cut algorithms with (what CPLEX calls) "dynamic search" (and a whole bunch of other stuff, who knows what half of it is...) (Update: this seems to be fixed)
- (Update) I still feel like there could be more experimentation regarding the negative examples (e.g., versus the strategy in the CL-LNS paper??). Since this is the main contribution, I wish it was actually more in focus throughout the paper.
questions: All questions have been answered.
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | IquAq42HFZ | official_comment | 1,700,667,810,729 | Ui2FqW5xoH | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_fyvF"
comment: I thank the authors for the detailed response and the new experimental results. However, some of my concerns remain.
1. Regarding the novelty over CL-LNS, I think "complementary" is not sufficient to justify the differences or the novelty. Moreover, the authors claim two novelties, the negative data collection method and the new loss function, but they do not provide any ablation study to support their advantages.
2. Regarding the prediction accuracy, I am not satisfied with the response. If the prediction has a low impact on the downstream tasks, then why do you need a prediction at all? If a poor prediction can also lead to good final performance, then I doubt the meaning and usefulness of the ML part, and the performance improvement may come from tuning other hyperparameters. So accuracy is important, because it justifies your core contribution, which is an ML component. Also, I do not agree with the last statement in response 5. Prediction accuracy is very easy to quantify, and we do not need to involve the downstream tasks here.
I will increase my score, but still believe that this paper needs further improvement. |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | lLVDYQu4XO | official_comment | 1,700,703,703,268 | IquAq42HFZ | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Authors"
] | comment: We thank the reviewer for taking the time to read our rebuttal and the follow-up feedback. Regarding the remaining concerns:
1. We propose data collection methods for finding negative samples that are new and specifically designed for our solution prediction task. Existing methods for finding negative samples, such as the one proposed for CL-LNS, do not directly apply here.
For the modified contrastive loss function, we conduct an additional ablation study on ConPaS-LQ on the MVC and CA problems.
The initial results are shown in the table below, where ConPaS-LQ (unweighted) refers to training using the original InfoNCE function without considering different qualities of the samples and ConPaS-LQ (weighted) refers to training using the modified loss. When we use the original loss function, ConPaS is still able to outperform PaS. Its performance further improves when the modified loss function is used.
| Method | MVC Primal Gap | MVC Primal Integral | CA Primal Gap | CA Primal Integral |
|------------------------|------------|-----------------|------------|-----------------|
| PaS | 0.17% | 13.9 | 1.16% | 28.9 |
| ConPaS-LQ (unweighted) | 0.12% | 3.3 | 0.57% | 24.3 |
| ConPaS-LQ (weighted) | 0.10% | 2.8 | 0.16% | 19.7 |
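For reference, the metrics reported in these tables are typically defined as follows (standard definitions from the MIP literature; the paper may use a slight variant). For an incumbent $x$ and the best known solution $x^*$, the primal gap and the primal integral over a time limit $T$ are:

```latex
\gamma(x) =
\begin{cases}
1, & \text{if no incumbent exists or } (c^\top x)(c^\top x^*) < 0,\\
\dfrac{|c^\top x - c^\top x^*|}{\max\{|c^\top x|,\,|c^\top x^*|\}}, & \text{otherwise,}
\end{cases}
\qquad
\mathrm{PI}(T) = \int_0^T \gamma\big(x(t)\big)\,dt,
```

where $x(t)$ is the incumbent at time $t$; lower is better for both.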
2. We report the prediction accuracy, quantified as the classification accuracy over all binary variables (with the threshold set to 0.5), in the following table for both PaS and ConPaS-LQ on the MVC and CA problems. The accuracy is the fraction of correctly classified variables averaged over 50 positive samples per instance, and we report the average over 100 validation instances. Since the classification accuracy is sensitive to the threshold, we also report the AUROC.
On the MVC instances, though ConPaS has a lower accuracy (w.r.t. threshold = 0.5), it has a higher AUROC than PaS. On the CA instances, their accuracies and AUROCs are similar. We would like to again point out that a better accuracy/AUROC does not necessarily indicate better downstream task performance, even though we believe they are correlated.
| Method | MVC Accuracy | MVC AUROC | CA Accuracy | CA AUROC |
|-----------|----------|-------|----------|-------|
| PaS | 81.2% | 0.88 | 88.3% | 0.87 |
| ConPaS-LQ | 76.9% | 0.91 | 86.9% | 0.86 |
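As an illustration of how such per-variable metrics could be computed, here is a minimal sketch with made-up arrays; the authors' actual evaluation code is not available here, so the variable names and values are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# p: predicted probability that each binary variable takes value 1
# y: the variable values in one collected positive (near-optimal) solution
p = np.array([0.9, 0.2, 0.7, 0.05, 0.6])
y = np.array([1,   0,   1,   0,    0])

accuracy = np.mean((p > 0.5).astype(int) == y)   # threshold-dependent
auroc = roc_auc_score(y, p)                      # threshold-free ranking quality
print(f"accuracy@0.5 = {accuracy:.3f}, AUROC = {auroc:.3f}")

# In the tables above, such scores would be averaged over the 50 positive
# samples per instance and then over the 100 validation instances.
```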
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | YDnAiTaavU | official_review | 1,698,824,385,414 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Reviewer_fyvF"
summary: The authors propose a predict-and-search approach for solving mixed integer programming (MIP) problems, following the GNN-guided approach from [Han2022]. The algorithm collects high-quality solutions as positive samples and low-quality solutions as negative samples, and then trains the prediction model with contrastive learning. The authors demonstrate that the proposed method outperforms the baselines on four commonly used mixed-integer linear programming datasets.
soundness: 3 good
presentation: 3 good
contribution: 2 fair
strengths: 1. Improving the prediction model through a contrastive learning training scheme is intuitive and effective.
2. The authors' experiments show that the proposed method achieves a significant improvement over the baselines.
3. The paper is mostly well-written and easy to follow.
weaknesses: 1. The technical novelty is limited. First, it is a somewhat straightforward application of contrastive learning to predict-and-search. Second, the proposed method is essentially the same as the ICML 2023 paper [Huang2023] (Figure 1 of this paper almost coincides with Figure 1 in [Huang2023]), if we consider the procedure as a one-step LNS.
2. Since the proposed approach is based on predict-and-search, it cannot guarantee optimality or feasibility. This limitation is not discussed or analyzed properly in the paper; for example, there is no empirical study of the feasibility ratio on the test instances. The authors should also conduct experiments on more constrained problems. Furthermore, it is somewhat unfair to compare the anytime performance with SCIP, since the proposed method (as well as predict-and-search) essentially solves a much simpler problem than SCIP because some variables are fixed.
3. The authors collected training data using Gurobi but only compared the test performance with SCIP. I cannot see any reason not to compare with Gurobi at test time.
4. The authors used two ways to collect negative samples but only report their empirical performance, without a deeper analysis of which is more reasonable.
5. The authors did not report how accurate the solution prediction is.
questions: Please see the above weaknesses.
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | sCtJyx7nrc | meta_review | 1,701,823,479,144 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission2/Area_Chair_JTNe"
] | metareview: This paper proposes a neural model trained with contrastive learning for solving mixed integer linear programs. It obtains the solution using a recently proposed predict-and-search (PaS) strategy and empirically demonstrates the advantage of using contrastive learning on top of PaS.
All reviewers agree on the good presentation of this paper and the soundness of this work. The experiments on four datasets also show good results. Nonetheless, the novelty of this work is rather limited, as it is merely an incremental change over PaS (fyvF, GFiJ, KTmj) and is similar to a recent paper on contrastive learning for large neighborhood search (fyvF, KTmj). Also, the lack of an ablation study (fyvF, nvLV) makes the effectiveness of the proposed components questionable. The authors addressed part of these concerns during the rebuttal. However, the overall lack of novelty remains a major issue. Also, a better understanding of the source of the performance improvement is needed.
A rejection is recommended.
justification_for_why_not_higher_score: This paper lacks novelty and the ablation study is not well conducted.
justification_for_why_not_lower_score: N/A |
J2kRjUAOLh | Contrastive Predict-and-Search for Mixed Integer Linear Programs | [
"Taoan Huang",
"Aaron M Ferber",
"Arman Zharmagambetov",
"Yuandong Tian",
"Bistra Dilkina"
] | Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found. | /pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf | irRrnFXFjA | decision | 1,705,406,011,934 | J2kRjUAOLh | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Reject |
ZGBOfAQrMl | Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention | [
"Xingyu Zhou",
"Leheng Zhang",
"Xiaorui Zhao",
"Keze Wang",
"Leida Li",
"Shuhang Gu"
] | Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task.
Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart
phones and consumer electronic products.
In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR).
The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features.
Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input
features into consideration and only exploits previously enhanced features to provide supplementary information.
In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames.
We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches.
The experimental results demonstrate that MIA-VSR improves
the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy. | /pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf | SkOC9YExMC | official_review | 1,698,846,203,387 | ZGBOfAQrMl | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission3/Reviewer_kfYa"
] | summary: This paper presents a novel Transformer-based video super-resolution model called MIA-VSR (Masked Intra and Inter-frame Attention Video Super-Resolution). The model aims to improve the efficiency of video super-resolution by leveraging temporal continuity between adjacent frames and reducing redundant computations. The key components of MIA-VSR include an intra-frame and inter-frame attention block (IIAB) and an adaptive mask predicting module.
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: 1. Improved efficiency: MIA-VSR reduces computational complexity and memory footprint without sacrificing video super-resolution performance.
2. Effective use of temporal information: The model leverages temporal continuity between frames to avoid unnecessary computations and provide better results.
3. Adaptive masking: The adaptive mask predicting module generates block-wise masks to skip unimportant computations, further improving efficiency.
weaknesses: 1. Complexity: The model may be more complex to implement and train compared to simpler video super-resolution methods.
2. Limited applicability: The effectiveness of MIA-VSR may be limited to specific video super-resolution tasks and datasets.
3. Runtime: Although MIA-VSR reduces computational complexity, its runtime may still be slower than some other methods due to the Transformer architecture.
questions: 1. In the comparison with state-of-the-art methods, you mentioned that MIA-VSR achieves better trade-offs between accuracy and efficiency. How does MIA-VSR handle the trade-off between model size and computational efficiency? Can you provide more quantitative analysis or visualizations to support this claim?
2. Can you provide some insights on the design choices for the Intra-frame and Inter-frame Attention Block (IIAB)? How does it differ from other attention mechanisms used in video super-resolution models?
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
ZGBOfAQrMl | Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention | [
"Xingyu Zhou",
"Leheng Zhang",
"Xiaorui Zhao",
"Keze Wang",
"Leida Li",
"Shuhang Gu"
] | Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task.
Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart
phones and consumer electronic products.
In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR).
The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features.
Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input
features into consideration and only exploits previously enhanced features to provide supplementary information.
In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames.
We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches.
The experimental results demonstrate that MIA-VSR improves
the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy. | /pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf | UKSQxS1KNk | official_review | 1,698,743,938,226 | ZGBOfAQrMl | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission3/Reviewer_oDvo"
] | summary: To address the heavy computational burden and large memory footprint in Transformer-based Video Super-resolution (VSR), this paper proposes a masked intra and inter frame attention (MIA-VSR). MIA-VSR uses feature-level temporal continuity between adjacent frames. The experiments demonstrate the effectiveness of the proposed method.
soundness: 2 fair
presentation: 2 fair
contribution: 2 fair
strengths: 1. This paper proposes an intra-frame and inter-frame attention block to enhance SR features, and proposes an adaptive mask predicting module to mask out unimportant regions between adjacent frames.
2. Compared with existing Transformer-based VSR methods, the proposed method has a lower computational cost and a smaller memory footprint.
weaknesses: 1. The novelty of this paper is not clear.
2. The performance on benchmark datasets is not significant. Although the proposed method has a lower computational cost and memory footprint than existing Transformer-based VSR methods, it is still challenging to deploy on smartphones (the main issue that the authors highlight as their goal).
questions: 1. The motivations of the paper are to reduce the computational burden and the large memory footprint and to propose a VSR method for smart phones and consumer electronic products. However, the model size is large and not very efficient. For real applications, BasicVSR++ has more advantages than the proposed MIA-VSR. Compared with MIA-VSR, RVRT has a smaller model size, a shorter runtime and comparable PSNR.
2. Some details in Figure 1 are not clear. For example, the inputs of MPM are not clear. How is x_m^{t-2} in the orange block obtained? What are the blue dashed lines? Why are the output video results poor?
3. The performance gains are not significant under different metrics. In addition, in Figure 4, it would be better to provide BasicVSR++ results instead of BasicVSR or EDVR.
flag_for_ethics_review: ['No ethics review needed.']
details_of_ethics_concerns: None
rating: 3: reject, not good enough
confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
code_of_conduct: Yes |
ZGBOfAQrMl | Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention | [
"Xingyu Zhou",
"Leheng Zhang",
"Xiaorui Zhao",
"Keze Wang",
"Leida Li",
"Shuhang Gu"
] | Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task.
Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart
phones and consumer electronic products.
In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR).
The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features.
Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input
features into consideration and only exploits previously enhanced features to provide supplementary information.
In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames.
We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches.
The experimental results demonstrate that MIA-VSR improves
the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy. | /pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf | XgK5VCdL4l | official_review | 1,698,637,121,669 | ZGBOfAQrMl | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission3/Reviewer_FTQC"
] | summary: The paper proposes a new framework called MIA-VSR for video super-resolution (VSR) tasks. The framework utilizes a feature-level masked processing approach to reduce computational burden and memory footprint, making it more suitable for deployment on constrained devices. The key component of MIA-VSR is the intra-frame and inter-frame attention block, which considers the roles of past features and input features and only uses previously enhanced features to provide supplementary information. Additionally, an adaptive block-wise mask predicting module is developed to skip unimportant computations based on feature similarity between adjacent frames. Ablation studies and comparisons with state-of-the-art VSR methods demonstrate that MIA-VSR improves memory and computation efficiency without sacrificing PSNR accuracy.
soundness: 2 fair
presentation: 2 fair
contribution: 2 fair
strengths: 1. The authors try to accelerate Transformer-based VSR at multiple levels, and I think masked processing is reasonable.
2. The comparative experiments are objective and detailed.
weaknesses: 1. My main concern is that the method does not improve significantly over previous work.
1.1 From Table 3, choosing a Transformer for this task does not introduce obvious benefits, especially since CNNs are compatible with more inference acceleration frameworks. Compared with BasicVSR++, subsequent works use an order of magnitude more computational overhead but have not made an improvement that I think is worth it. The impression this paper gives me is that it hopes to improve the practicality of this type of method by improving the processing efficiency of VSR. However, the results do not seem to achieve this goal. After all, BasicVSR++ is already very slow for users.
1.2 In terms of visual comparisons, there seems to be no significant advantage over PSRT-recurrent overall.
2. About masked processing
2.1 I'm worried it's not novel enough. In low-level vision, this kind of block-wise processing is not uncommon. Here are a few examples:
* Image SR: Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks
* Background Matting: Real-Time High-Resolution Background Matting
Although this paper does so by considering temporal continuity, given the effects shown in Table 1, I think this contribution is insufficient.
questions: 1. How do you avoid the blocking artifacts that masked processing may introduce?
2. I think Figure 1 needs to be redrawn; what is the key message this figure is trying to highlight?
3. If generating 720p video requires one second of processing per frame, in what scenario do we need video SR?
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
ZGBOfAQrMl | Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention | [
"Xingyu Zhou",
"Leheng Zhang",
"Xiaorui Zhao",
"Keze Wang",
"Leida Li",
"Shuhang Gu"
] | Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task.
Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart
phones and consumer electronic products.
In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR).
The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features.
Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input
features into consideration and only exploits previously enhanced features to provide supplementary information.
In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames.
We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches.
The experimental results demonstrate that MIA-VSR improves
the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy. | /pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf | 3zFTJy36YZ | official_review | 1,698,502,710,527 | ZGBOfAQrMl | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission3/Reviewer_FxaB"
] | summary: This paper proposes a Transformer-based recurrent video super-resolution model, termed MIA-VSR.
The aim is to reduce redundant computation in VSR models.
To achieve this goal, they propose two components: an intra-frame and inter-frame attention block (IIAB or MIA) and an adaptive mask predicting module (MPM).
MIA aims to provide supplementary information from previously enhanced features (temporal information).
MPM aims to generate block-wise masks to reduce the computation.
Experiments show that MIA-VSR achieves good results on several datasets.
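As a toy illustration of the block-wise masking idea summarized above (a sketch under our own assumptions; the block size, threshold, and how the mask gates the attention computation are not the paper's exact design):

```python
import torch
import torch.nn.functional as F

def block_mask(feat_prev: torch.Tensor, feat_curr: torch.Tensor,
               block: int = 8, tau: float = 0.1) -> torch.Tensor:
    """True = block changed enough to recompute; False = reuse previous results."""
    # Per-pixel feature change averaged over channels: (B, 1, H, W)
    diff = (feat_curr - feat_prev).abs().mean(dim=1, keepdim=True)
    # Average the change inside each non-overlapping block: (B, 1, H/block, W/block)
    block_diff = F.avg_pool2d(diff, kernel_size=block, stride=block)
    return (block_diff > tau).squeeze(1)

# Toy usage: adjacent frames with nearly identical features would skip most blocks.
prev = torch.randn(1, 64, 64, 64)
curr = prev + 0.01 * torch.randn_like(prev)
mask = block_mask(prev, curr)
print("fraction of blocks recomputed:", mask.float().mean().item())
```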
soundness: 2 fair
presentation: 2 fair
contribution: 2 fair
strengths: 1. In the field of video super-resolution, redundant computation is very worthy of study. The problem explored in this paper is meaningful, and reducing feature computation through a masking strategy sounds reasonable.
2. The key idea is simple and easy to understand. From the experimental results, the method in this paper seems to be effective.
weaknesses: 1. The core problem addressed in this paper is redundant computation in VSR, so the mask strategy is proposed to reduce the computation of unimportant features. While designing attention mechanisms that take advantage of temporal information has been discussed in many previous works, the second contribution of this article (MIA) does not seem to differ from existing attention mechanisms.
In other words, directly computing attention on the output features after the mask strategy is also computation-intensive. If the authors claim the attention mechanism as a contribution, it should be contrasted with this baseline.
2. In terms of reducing redundant computation through the mask strategy, the authors should discuss how it differs from other approaches such as Token Merging and TTVSR. TTVSR also reduces computation by restricting the attention mechanism to trajectories computed from the temporal relationships of optical flow. The authors should cite these works and discuss the differences.
3. Experiments. The mask has a binary ratio, and the authors should perform ablation experiments on it (including FLOPs), not just choose 0.5. Many recent VSR methods, such as TTVSR and FTVSR, are not referenced or compared.
4. Writing. The second paragraph of the intro is written like related work. The figures in this paper are messy and not easy to understand, e.g., Figure 1, where many arrows are easy to misread. For example, why do two arrows point to mask M? Is it the output of MPM?
5. The authors claim to address the problem of heavy computational burden. According to Table 9, compared with BasicVSR++ (7.3M / 92ms / 32.39dB), MIA-VSR achieves 16.5M / 822ms / 32.78dB. These results do not show its advantage.
To sum up, I think the innovation and contribution of this paper are not clear enough, and the writing and experiments are insufficient. This paper is not sufficient for acceptance at ICLR.
questions: see weakness
flag_for_ethics_review: ['No ethics review needed.']
rating: 3: reject, not good enough
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | 5QZYQUEu7b | official_review | 1,698,783,361,796 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Reviewer_qTYR"
] | summary: This paper studies how to detect image-text inconsistency with diffusion models. More specifically, the authors design a pipeline that iteratively uses diffusion models to edit the text and images in image-text pairs to gradually optimize a mask that points out where the inconsistency comes from. This task is interesting and meaningful for misinformation detection, as it provides interpretable prediction results. To evaluate the proposed method, the authors collected a dataset containing image-text pairs and their inconsistency masks. Experiments show that the proposed method outperforms baselines and gives explainable predictions of the inconsistency.
soundness: 3 good
presentation: 2 fair
contribution: 4 excellent
strengths: 1. The task studied in this paper is meaningful.
2. The dataset that they collected is a valuable contribution to the community.
3. The method is novel.
weaknesses: 1. The writing is not very good. I spent several hours reading the methodology part to understand their pipeline.
2. The idea is well justified for inconsistency in object alignment. But what if the predicate is not aligned, i.e., the person is correct but the action is not?
questions: How do the annotation process and the model handle predicates?
flag_for_ethics_review: ['No ethics review needed.']
rating: 8: accept, good paper
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | BGAsoFzNFk | official_review | 1,698,757,618,691 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Reviewer_LEKC"
] | summary: This paper presents D-TIIL for identifying and localizing inconsistencies between text and images.
A new dataset, TIIL, containing 14K consistent and inconsistent text-image pairs, is introduced for evaluating the method. D-TIIL outperforms existing approaches in terms of accuracy and demonstrates more explainable results. In a nutshell, the paper offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency. However, it also acknowledges the potential misuse of the method for creating deceptive text-image pairs and suggests improving the algorithm and restricting access.
soundness: 3 good
presentation: 2 fair
contribution: 2 fair
strengths: 1. Originality: The paper introduces a novel method, D-TIIL, that exposes text-image inconsistency along with the locations of inconsistent image regions and words. Also, the new TIIL dataset is the first dataset with pixel-level and word-level inconsistency annotations, providing fine-grained and reliable labels.
2. Quality: The D-TIIL and TIIL dataset generation are thoroughly described. The paper also provides a comprehensive comparison of the proposed method with existing approaches.
3. Clarity: The paper is well-structured and clearly written. The method is explained in detail and the experiment results are presented in an understandable manner.
4. Significance: The D-TIIL method improves the accuracy of inconsistency detection and provides more explainable results. The introduction of the diffusion model makes it possible to align text and images in a latent and joint representation space to discount irrelevant information and incorporate broader knowledge.
weaknesses: 1. The paper acknowledges that D-TIIL may struggle with inconsistencies that depend on specific external knowledge, and this could reduce the effectiveness of the method in real-world applications.
2. The D-TIIL method relies heavily on the text-to-image diffusion models and benefits a lot from the semantic space that is already well aligned. This dependence could limit the generalizability of the proposed method.
3. There are some confusing details in the method description section.
4. In the comparison of methods, the reasons why D-TIIL is superior are not discussed and analyzed in detail, and the potential solutions for the failure cases are not provided.
5. More specific discussions and measures could be included to prevent potential abuse rather than simply restricting access.
questions: 1. Regarding Step 3 in Section 3 METHOD, the proposed E_{dnt} and descriptions like “include extra implicit information from the images and excludes additional implicit information that only appears in the text” raise doubts about the effectiveness of the “text denoising” process. Such “text denoising” seems too idealistic. In Section 5.4, for example, there is the failure case of the word "office". This raises the suspicion that the D-TIIL method is only valid for simple objects, but not for backgrounds or objects that carry more complex semantics.
2. Also, the strong dependency on the diffusion model affects the generalizability of the method. If text and image are not well aligned in the latent space, the validity of the method will be further affected. Semantic entanglement can also occur.
3. Regarding Step 4 in Section 3 METHOD, the descriptions like “We then compute the cosine similarity score between this image embedding and the input text embedding” are confusing to the readers.
4. In the Data Generation part of Section 4 TIIL DATASET, it is unclear where T_{m} comes from; is it manually designed?
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | GRny6S4ecC | official_review | 1,698,745,076,380 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Reviewer_4tYr"
] | summary: The authors propose D-TIIL (Diffusion-based Text-Image Inconsistency Localization), a system for automatically identifying and explaining text-image inconsistencies. D-TIIL uses text-to-image diffusion models to locate semantic inconsistencies in text-image pairs. Diffusion models trained on large datasets filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies.
To evaluate the effectiveness of D-TIIL, the authors also introduce a new dataset (TIIL) with 14K consistent and inconsistent text-image pairs.
soundness: 2 fair
presentation: 3 good
contribution: 2 fair
strengths: • The paper is well written and well structured
• The problem and the related work are well introduced
• The framework is explained in detail
• The idea to build consistency scores between stable diffusion and the original image is interesting.
weaknesses: • The general theoretical idea behind the approach lacks clarity
• The real-world application is not very clear, e.g., wrong labels represent a different type of mislabeling than objects that are simply swapped
• Sensitivity to the threshold: the chosen threshold strongly influences the mask M and the consistency score
With D-TIIL, the authors have presented an interesting method for using diffusion models to evaluate the consistency of image-text pairs.
However, the utility of the method is not fully evaluated in detail. Deeper insights into why this approach works are lacking. In addition, it would be nice to see how the approach works on other datasets where the labeling is just mixed up or misleading.
In addition, for an ICLR paper I would recommend investigating the method in more detail in terms of the learned representations.
The paper is well written and has some interesting ideas, e.g., the use of diffusion models for detecting image-text inconsistency. Both the method and the dataset are valuable. However, to be accepted at ICLR I would expect more and deeper investigation of the method and the dataset: what is learned, and what are the shortcomings?
There are some doubts, such as whether the model could be sensitive to the DALL-E-generated part rather than to the text-image inconsistency itself. Experiments that evaluate this underlying behavior are missing. Moreover, a second evaluation on another dataset with more established baselines would be preferable to prove some of the assumptions, advantages, and shortcomings of the method.
questions: • How does the approach perform on completely wrong image descriptions?
o Is the whole image masked?
• Is the model sensitive to the image part generated by DALL-E rather than to the parts that do not correspond to the text?
o Is there an experiment that can prove that?
o Maybe regenerate the image for the dataset also with the right semantic class?
• Is there another dataset where the method could be compared also to other baselines?
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | wVFaJBiRpf | official_review | 1,698,487,488,683 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Reviewer_D7Cn"
] | summary: This paper develops a new method, D-TIIL, to expose text-image inconsistency by localizing inconsistent image regions and words, a problem that commonly arises with T2I diffusion models. To achieve this, they introduce a new dataset, TIIL, for evaluating text-image inconsistency localization with pixel-level and word-level inconsistency annotations.
soundness: 3 good
presentation: 2 fair
contribution: 3 good
strengths: 1. The dataset's contribution is commendable. Existing datasets lack the capacity to furnish evidence regarding inconsistencies occurring at both the image region and word levels, which is essential for evaluating D-TIIL (Diffusion-based Text-Image Inconsistency Localization).
2. The problem addressed in this research is of significant importance. Previous methods have primarily focused on determining the presence of inconsistencies, whereas this paper introduces a novel approach to pinpointing the specific locations where these inconsistencies occur.
weaknesses: 1. It would be valuable to explore whether this method could be extended to evaluate other text-to-image (T2I) augmentation techniques (i.e., [1-3]). Given the abundance of research on generating images based on textual prompts, applying this method for evaluation purposes could have a broader impact and contribute significantly to the field.
2. Are there alternative evaluation metrics to assess the correspondence between text and images? Based on my experience with CLIP scores, it may not consistently capture performance accurately in various scenarios.
[1] Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models. SIGGRAPH 2023
[2] Improving Sample Quality of Diffusion Models Using Self-Attention Guidance. ICCV 2023
[3] Expressive Text-to-Image Generation with Rich Text. ICCV 2023
questions: As mentioned in the above weakness, I would appreciate seeing the proposed method applied more extensively in evaluation. The inclusion of evaluation metrics beyond CLIP scores could enhance the robustness and confidence of this paper.
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | pyMvEOzcit | official_comment | 1,700,067,931,038 | 5QZYQUEu7b | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: Rebuttal by Authors
comment: **Q1. The writing needs to be improved.**
Thank you for your suggestion on the paper. We will improve the writing in the finalized version of this manuscript and pay special attention to the methodology part to make it more readable.
**Q2. How do the annotation process and the model handle predicate inconsistencies?**
This is a good point. Our method can handle inconsistencies not only in objects but also in predicates and adjectives. We have added examples in Appendix Fig. 13 of the revised manuscript. In terms of dataset construction, the annotators are instructed to select and edit object-term pairs from the image and text, which includes edits to objects, scenes, and attributes based on predicates or adjectives. For example, a singing man is edited with the prompt “a man playing basketball”, and a yellow cat can be changed with the prompt “a red cat”.
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | uvnpXqbDbN | official_comment | 1,700,068,404,868 | BGAsoFzNFk | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: Rebuttal by Authors
comment: **Q1. D-TIIL may struggle with inconsistencies with respect to specific external knowledge, which could reduce the effectiveness of the method in real-world applications.**
With the rapid development of text-to-image (T2I) diffusion models, D-TIIL will become more generalizable to different types of real-world cases, e.g., by using domain-specific diffusion models as mentioned in the Conclusion or more powerful diffusion models such as Stable Diffusion XL.
**Q2. The dependence with diffusion model could limit the generalizability of the proposed method, especially when text and image are not well aligned on the latent space.**
Our method leverages the semantic alignment capability of T2I diffusion models and would fail for complex cases that current diffusion models cannot handle. However, our method is not restricted to a specific T2I diffusion model and can be implemented with and adapted to the most recent diffusion models to further enhance its generalizability. In terms of identifying various degrees of inconsistency in text-image pairs, our current method can address not only subtle, locally manipulated inconsistencies but also whole-image swaps and mix-ups. We provide examples of such scenarios in Fig. 13 (a) and (c) of the revised manuscript.
**Q3. There are some confusing details in the method description section.**
Thanks for pointing this out. We will improve the writing of the methodology part in the finalized version of the paper.
**Q4. The reasons why D-TIIL is superior are not discussed and analyzed in detail, and the potential solutions for the failure cases are not provided.**
Our D-TIIL outperforms the baselines in the comparison experiments for two main reasons. First, unlike the baseline DetCLIP, which uses object segmentation to compare CLIP similarity, or GAE, which uses CLIP attention heatmaps, our method leverages T2I diffusion models to learn the semantic connections between textual and visual information. Second, and more importantly, instead of directly comparing the text and image embeddings, D-TIIL relies on a two-step alignment that iteratively excludes irrelevant information, obtains knowledge-shared representations from the two modalities, and then directly exposes the inconsistencies.
The potential solutions for failure cases involve using more powerful diffusion models such as the most recent T2I diffusion models or domain-specific diffusion models, as discussed in the Conclusion section of the submission.
**Q5. More specific discussions and measures could be included to prevent potential abuse rather than simply restricting access.**
We will release our code as open-source with the condition that it “must not distribute harmful, offensive, dehumanizing content or otherwise harmful representations of people or their environments, cultures, religions, etc. produced with the model weights”.
**Q6. The term “text denoising” is too idealistic and D-TIIL may not localize the inconsistency that backgrounds or objects that contain more complex semantics.**
Thanks for the suggestion; we will consider using the term “text alignment” rather than “text denoising”. This process actually reduces the distance between the image and text semantic spaces, as shown in the "Analysis of the learned representation" section of the Appendix in the revised manuscript.
Our method can handle inconsistencies not only in simple objects but also in backgrounds and object attributes. We have included such examples in Fig. 13 (c) of the Appendix.
**Q7. The descriptions of Step 4 in Method is not clear.**
Thanks for pointing this out. We have modified the statement as follows: “We then compute the cosine similarity score between the CLIP image embedding of the masked image and the input text embedding E_0 as the consistency score.”
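For concreteness, the revised computation can be sketched as follows. This is a minimal illustration only: `clip_image_encoder` is a placeholder for any CLIP-style image encoder, and the tensor names and shapes are assumptions for this sketch rather than the paper's actual implementation.
```python
# Minimal sketch of the revised Step 4: cosine similarity between the CLIP
# image embedding of the masked image and the input text embedding E_0.
import torch
import torch.nn.functional as F

def consistency_score(masked_image: torch.Tensor,
                      text_embedding_E0: torch.Tensor,
                      clip_image_encoder) -> torch.Tensor:
    img_emb = clip_image_encoder(masked_image)          # (1, d) image embedding
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(text_embedding_E0, dim=-1)    # (1, d) text embedding
    return (img_emb * txt_emb).sum(dim=-1)              # cosine similarity in [-1, 1]
```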
**Q8. In Section 4 (TIIL DATASET), the origin of T_{m} is unclear.**
T_{m} is the altered text based on human annotations. We have clarified this in Section 4. Thanks for pointing this out. |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | EQET3bWEDL | official_comment | 1,700,068,853,067 | GRny6S4ecC | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: Rebuttal by Authors
comment: **Q1. The general theoretical idea behind the approach lacks clarity.**
Thank you for your suggestion on the paper. We will improve the writing in the methodology part to make it more readable. The theoretical idea behind the approach is to employ the text-to-image diffusion models as “omniscient” agents to align the image-text latent space so that our proposed two alignment steps are able to filter out irrelevant information and incorporate background knowledge to identify inconsistencies.
**Q2. The real-world application is not very clear.**
We describe the problem that this paper solves in the Introduction. Our method aims to expose misinformation on social media and the Internet created by juxtaposing images with texts that do not accurately reflect the image’s original meaning or intention.
**Q3. Sensitivity to mask threshold**.
We agree that the mask threshold influences the performance. Therefore, our method designs a sample-specific threshold based on the average values among the mask instead of using a fixed threshold for all samples. In Table 10 and Figure 12 of the Appendix, we conducted a comparison with four fixed threshold strategies to show the effectiveness of our mask threshold setting method.
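To illustrate the difference between the two strategies, the sketch below contrasts a sample-specific threshold (the mean of the per-pixel difference map, following the description above) with a fixed global threshold. The variable `diff_map` and the exact choice of statistic are assumptions for illustration and may differ from the paper's implementation.
```python
# Sample-specific (adaptive) binarization vs. one fixed cutoff for all samples.
import numpy as np

def adaptive_mask(diff_map: np.ndarray) -> np.ndarray:
    threshold = diff_map.mean()                      # per-sample threshold
    return (diff_map > threshold).astype(np.uint8)

def fixed_mask(diff_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    return (diff_map > threshold).astype(np.uint8)   # shared global threshold
```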
**Q4. Deeper insights into why this approach works are lacking.**
Understanding the semantic connection between text and image is the key to solving the text-image inconsistency problem, and our method is the first to explore generative AI models for this purpose. Our results reveal two aspects not previously known in the literature. First, we use large-scale text-to-image diffusion models as a foundation model with extensive background knowledge to effectively align text and image semantics. Second, we design two alignment steps to iteratively align the image/text embeddings and filter out irrelevant semantic information with diffusion models. Thus, we obtain knowledge-shared text embeddings from both the input image and text, making it easier to identify semantic inconsistencies.
**Q5. Lack investigate the method in more detail in terms of learned representations.**
We have included an investigation of the learned representations in the paragraph “Analysis of the learned representation” of the Appendix in the revised manuscript.
**Q6. How does the approach work on other datasets? Is there another dataset on which the method could be compared?**
To the best of our knowledge, the proposed TIIL dataset is the first text-image inconsistency localization dataset with pixel-level and word-level inconsistency annotations. The proposed TIIL dataset is based on content from diverse real-world news stories and a wide range of modifications in inconsistent regions, including local inconsistencies and global swapping based on mixed-up pairs.
**Q7. Could the model be sensitive to the DALLE generated part instead being sensitive to the text-image inconsistency?**
Our method is not sensitive to the part that DALL-E generated. As shown in the Appendix, we compared the performance of our method with baselines on Image-manipulated and Text-manipulated subsets in Tables 8 and 9. The results show the effectiveness and superiority of our method in both DALL-E generated inconsistent samples and non-DALL-E manipulated samples.
**Q8. What are the shortcomings of the proposed method?**
As discussed in Section 5.4 on failure cases, one shortcoming of our method is that given the limited prior knowledge of the diffusion model we used, our model may not effectively handle the inconsistencies with respect to specific external knowledge.
**Q9. How does the approach perform on completely wrong image descriptions?**
We have added examples with completely wrong image descriptions from our TIIL dataset in Appendix Fig. 13 (a) of the revised manuscript. Our method provides a whole-image mask for this kind of inconsistent sample. |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | QWaXWyAYcF | official_comment | 1,700,068,966,207 | wVFaJBiRpf | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: Rebuttal by Authors
comment: **Q1. Could this method be extended to evaluate other text-to-image (T2I) augmentation techniques?**
Thanks for the suggestion. Running these experiments takes time, so we cannot complete them within the ICLR review period. Technically, our method is not restricted to a specific text-to-image diffusion model and can be extended to other, more recent diffusion models and their variants.
**Q2. Are there alternative evaluation metrics to assess the correspondence between text and images other than CLIP scores?**
Following existing text-image correspondence evaluation methods [1-4], we used the CLIP score to assess the correspondence between text and images. It is worth noting that the CLIP score is only used to analyze the inconsistency levels of our dataset in Table 1 and to evaluate the effectiveness of the text embedding initialization in Table 7; it works reasonably well on our dataset because most of the inconsistencies are at the object level rather than the scenario level. We did not use the CLIP score in the text-image inconsistency localization pipeline, so it does not influence the localization performance.
[1] Meng, Chenlin, et al. "On distillation of guided diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Saharia, Chitwan, et al. "Photorealistic text-to-image diffusion models with deep language understanding." Advances in Neural Information Processing Systems 35 (2022): 36479-36494.
[3] Couairon, Guillaume, et al. "Diffedit: Diffusion-based semantic image editing with mask guidance." ICLR. 2023.
[4] Blattmann, Andreas, et al. "Retrieval-augmented diffusion models." Advances in Neural Information Processing Systems 35 (2022): 15309-15324. |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | e2IKbmRt5J | official_comment | 1,700,071,412,737 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: General Comments
comment: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are grateful for the positive recognition of the reviewers that our idea and task are interesting and meaningful (Reviewers qTYR, 4tYr, and D7Cn), the method is novel and provides interpretable evidence (Reviewers qTYR and LEKC), the paper is well written (Reviewers LEKC and 4tYr), and our dataset is contributive to the community (Reviewers qTYR and D7Cn).
We have responded to each reviewer's comments in detail below. A revised version of the manuscript has been uploaded. We hope our response and rebuttal revision will address the reviewers' concerns. |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | q0kYsKXAbH | official_comment | 1,700,638,980,203 | uvnpXqbDbN | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Reviewer_LEKC"
] | title: Post-rebuttal comments.
comment: Dear Authors,
Thank you for your detailed response to my comment.
Your explanation enhances my understanding of your work.
However, I would like to emphasize that my primary concern remains regarding the limitations of the approach.
Specifically, the effectiveness of the approach seems heavily dependent on well-trained T2I diffusion models, where the textual and visual semantic spaces are already well aligned. For diffusion models that are not as well trained, the approach will not work. This dependency may limit its applicability and generalizability, thus diminishing the contribution of your work.
--Reviewer LEKC |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | M6zZ6Gy9YA | official_comment | 1,700,674,736,673 | q0kYsKXAbH | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Authors"
] | title: Official Comment by Authors
comment: Thank you for your comments. We agree with the reviewer that using diffusion models that are not well trained would limit the applicability of our method. Therefore, we build our method upon well-trained diffusion models, for example Stable Diffusion, which was trained on image-caption pairs from LAION-5B with 5 billion image-text pairs. The semantic space alignment of Stable Diffusion has been shown to be effective in a variety of tasks, such as text-guided image editing [1,2], image-text matching [3], and image captioning [4]. As such, the generalizability of the proposed method benefits from the broad knowledge and semantic alignment of diffusion models. Training diffusion models and enhancing their semantic alignment ability is an interesting but separate topic that many researchers are working on. Our method takes well-trained diffusion models as foundation models and can always be updated to a diffusion model with better alignment for better generalizability.
---
Reference:
[1] Couairon, Guillaume, et al. “Diffedit: Diffusion-based semantic image editing with mask guidance.” ICLR, 2023.
[2] Mokady, Ron, et al. “Null-text inversion for editing real images using guided diffusion models.” CVPR, 2023.
[3] Krojer, Benno, et al. “Are diffusion models vision-and-language reasoners?” NeurIPS, 2023.
[4] Xiao, Changrong, Sean Xin Xu, and Kunpeng Zhang. “Multimodal Data Augmentation for Image Captioning using Diffusion Models.” arXiv preprint arXiv:2305.01855 (2023). |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | qfw15Mm4Vp | meta_review | 1,701,809,436,090 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission5/Area_Chair_DiDR"
] | metareview: The reviewers appreciate the method, dataset and writing (with some exceptions). While there are some questions, none seem critical, and the authors attempted to address them.
justification_for_why_not_higher_score: Only one 8, and from a very brief review
justification_for_why_not_lower_score: Many scores are borderline, but the reviews list numerous meaningful strengths |
Ny150AblPu | Exposing Text-Image Inconsistency Using Diffusion Models | [
"Mingzhen Huang",
"Shan Jia",
"Zhou Zhou",
"Yan Ju",
"Jialing Cai",
"Siwei Lyu"
] | In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation. | /pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf | xNr56HTD2t | decision | 1,705,405,927,131 | Ny150AblPu | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (poster) |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | WJMxD7W7FR | official_review | 1,699,124,044,115 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_BYPY"
] | summary: This paper introduces I-PHYRE, a benchmark for intuitive physical reasoning capabilities in decision-making agents. It consists of four different block-removal games and benchmarks three planning strategies against them, implemented with both supervised and reinforcement learning. I-PHYRE's design centers on three principles: physical reasoning, multi-step planning, and in-situ intervention.
The four games are "basic" (teaching basic principles of physics), "noisy" (minor perturbations), "compositional" (combining various structures to require multi-step reasoning), and "multi-ball" (multiple dynamic events occurring concurrently, motivating carefully timed in-situ intervention). The planning strategies are "planning-in-advance" (generating an entire plan with timings based only on the initial state), "planning-on-the-fly" (generating the next action at each timestep given the observation - standard RL-type setup), and "combined" (generating the entire plan, then updating it after executing the first action).
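For concreteness, the three strategies can be sketched roughly as below. The helpers `plan_from` (full plan from an observation), `policy` (next action from an observation), and the gym-style `env` interface are hypothetical placeholders, not I-PHYRE's actual API; the "combined" variant is shown re-planning once after the first action, as described above.
```python
# Schematic contrast of the three planning strategies described in this summary.

def planning_in_advance(env, plan_from):
    obs = env.reset()
    for action in plan_from(obs):                    # whole plan (actions + timings) from the initial state
        obs, reward, done, info = env.step(action)

def planning_on_the_fly(env, policy):
    obs, done = env.reset(), False
    while not done:                                  # standard closed-loop RL rollout
        obs, reward, done, info = env.step(policy(obs))

def combined(env, plan_from):
    obs = env.reset()
    plan = plan_from(obs)                            # plan everything up front
    obs, reward, done, info = env.step(plan[0])      # execute the first action
    for action in plan_from(obs):                    # update the plan after the first action
        if done:
            break
        obs, reward, done, info = env.step(action)
```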
Experiments include results on the games from model-free deep RL agents as well as some other baselines in the supplementary material. Everything is compared to a human baseline. The paper specifically discusses how agents perform when generalizing (without further training) from the basic split to the other splits, as well as how the three training strategies differ.
Finally, the paper gives analysis of sources of difficulty in I-PHYRE, performance by offline algorithms, and limitations/future work.
soundness: 3 good
presentation: 2 fair
contribution: 3 good
strengths: #### Quality
- Overall, a solid paper. Intuitively, if an agent did well on I-PHYRE, I would believe that it had robust intuitive physics capabilities in certain domains, which is what a benchmark should convince me of.
- Appendix G is very valuable. It does rest on the assumption that all failures are either of bad order or bad timing, which eliminates more basic failures, but since it is a simple simulator, and since Appendix G clearly shows that all the failures in Basic are timing-based (i.e. all the failed plans have correct order), I am convinced that the Basic split teaches nontrivial principles and the other splits
#### Clarity
- Very well-written! Clear language
- The text organization is really useful, especially in the results section. The subsections make sense and each paragraph follows a claim-warrant structure. In terms of readability, papers often fall apart in the results section; this one does not.
#### Originality
I-PHYRE seems to have an original design. However, it's not the first interactive intuitive physics benchmark. The paper should compare to [1] and [2], though I do think it serves different, valuable purposes.
#### Significance
The tasks in I-PHYRE are distinct from other related work and test compelling aspects of intuitive physics reasoning. So, I think if the paper can prove its claims, it is significant.
[1] Physion: Evaluating Physical Prediction from Vision in Humans and Machines. Bear, D. M. et al. arxiv preprint: arXiv:2106.08261. 2021.
[2] Jain, A. et al. "Generalization to new actions in reinforcement learning." ICML 2020.
weaknesses: #### Quality
- The paper says that RL agents perform well on the noisy split, sort of justified by the fact that their noisy results "correlate" with their basic results. But that doesn't follow - the noisy results are clearly of lower magnitude even if they follow the same trends (across what factor of variation?), and we aren't given a correlation statistic to warrant this claim. The same applies to "correlation diminishes in the compositional and multi-ball splits... inherent complexities of these tasks impact performance negatively." The correlation claim is similarly 1) nonobvious from looking at Fig 2, even if it does sort of look true, 2) not backed up by a number, 3) not clear why it matters - even if the performance on these two were correlated with perf on basic across whatever factor of variation, but lower in magnitude, I would accept that they are harder (and I of course do, just from looking at Fig 2). Then making a claim that this difficulty is due to the "inherent complexities of these tasks" is, while not unbelievable, strong. I'm not doubtful, just not convinced.
- The discussion has a section titled "why do current RL agents fail on I-PHYRE?", which in my opinion is the most important aspect of a benchmark paper other than conceptual design and grounding in the environment. The paper claims three benefits: physics modeling being hard, multi-step interventions, and action timing - i.e. asserting that its design principles have effectively resulted in challenges for agents. However, this is all prose and little analysis of actual results - it would help to spell out for the reader which quantitative result comparison should lead to each conclusion. (Appendix G does a good job of this).
#### Clarity
- Figure 1 (repeated from a previous review I did of this paper, as the figure hasn't changed):
- Colors are hard to follow, maybe better to annotate split on box
- Since the boxes are much larger than the arrows, sort of look like two columns, and generally don't look like flowchart elements, it's confusing that the top-left box isn't the best one to start reading with. Bigger arrows and/or numbering would help.
- "Wrong order leads to no overlap elimination timing for two balls" - I don't understand this
- Compositional solution isn't that clear - annotation of key occurrences in each step might help
- "Combined" planning strategy needs better explanation (a step-by-step could help) - it sounds like the whole plan is generated based on the initial state, then the first action is executed, then the plan is updated, and that's it. But I'm not totally sure if the entire plan is updated, if it's ever updated again later after subsequent actions, etc.
- Figure 2 needs to be more organized and readable (in the previous version it was also hard to follow, but still more organized)
- Strategies should be put into separate subplots (with the same axes) or otherwise cued, especially since there are different numbers of each, making it hard to eyeball
- Baselines (especially the human comparison point) can be horizontal lines crossing the figure
- Not every algorithm gets every planning strategy, which doesn't seem to *only* be a function of the nature of the algorithm - so it's confusing to take all of this in just in the form of bars and text
- Fig 3 would benefit from organization as well - e.g. group strategies by color and differentiate within them by line texture.
#### Originality
- The paper claims the three planning strategies as a contribution, but they aren't original - they are baselines just like the offline methods tested on I-PHYRE. I agree that the *results* are a contribution, but I would fold that into the third contribution bullet, or at least make it clear that the "devised" planning strategies themselves shouldn't be considered original.
Nits:
- Different parts of the paper use different naming conventions for agents - e.g. "SAC-I" in one place, "SAC Inadvance" in another. Better to use the same thing throughout.
questions: - In a previous version of the paper, failure on I-PHYRE was ascribed to sparse action requirements and delayed reward. Now, those concepts are being pitched as being inherent to multi-step reasoning (delayed reward) and action timing (sparse action requirements), but it's not true that failing for those reasons means that if the agents were better at handling them, they would robustly learn to handle multi-step reasoning problems and in-situ intervention problems. Could you flesh this argument out more?
- What exactly is the nature of the "combined" strategy, and could you say more on why it's effective?
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | yxihvLc5Mt | official_review | 1,698,774,516,487 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_yzoQ"
] | summary: This paper proposes I-PHYRE, an interactive physical reasoning benchmark. While previous physical reasoning benchmarks mainly focus on reasoning happening in stationary scenes, I-PHYRE tests physical reasoning in an interactive format. This demands the agent to quickly understand the underlying physics, plan over multi-steps, and perform timely manipulation within a scene. The authors formulated I-PHYRE into four game splits to measure the agents' capability to learn and generalize about essential principles of physics, and
conducted extensive evaluations of existing learning algorithms and human performance. They showed a significant gap between humans and learning algorithms, and also analyzed the main factors behind current learning methods' failures.
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: 1. Clear motivation and novel design achieving the motivation.
- The authors clearly stated I-PHYRE’s contribution compared to other existing benchmarks.
- The proposed task of block elimination is well designed for the stated purpose, highlighting interactivity of physical reasoning.
- Additionally, dividing the tasks into 4 types for testing different types of generalization is well designed.
2. Thorough experiments and analysis
- The experiments are very thorough, including multiple reinforcement learning baselines, multiple planning strategies, learning from offline data, integration with large language model(LLM), human evaluation, etc.
- The authors also tried to analyze what makes learning algorithms difficult to generalize, and came up with three plausible factors.
- Additionally, the authors also quantitatively analyzed how significant action timing influences the performance of the algorithms.
weaknesses: 1. The visualization in Figure 1 is slightly hard to understand at first glance. Although the authors explain the figure more thoroughly on page 3, it would be better to either separate the figure into several images or add another image that describes the task more briefly.
2. Although the experiments were thorough, 40 games may be too small a number to expect strong generalization. It would be better if the authors had tested learning methods at a larger scale; for instance, they could try more diverse configurations given the same basic physics to model in the scene.
3. The cited work [1] also contains interactivity in the benchmark, although it mainly tackles generalization to new actions as the authors mentioned.
[1] Generalization to New Actions in Reinforcement Learning, International Conference on Machine Learning, 2020, Jain et al.
questions: 1. I wonder why the authors simply concatenated the predicted states with the current states when performing model-based reinforcement learning. A more widely used approach would be to:
    1. Along with the dynamics model, also train a reward model that predicts the reward given a state.
    2. Given a reward model and a dynamics model, perform planning (e.g., CEM planning); a minimal sketch of this pipeline is included after the reference below.
2. There are also other recent model-based RL methods the authors could try, such as [1].
[1] Temporal Difference Learning for Model Predictive Control, International Conference on Machine Learning, 2022, Hansen et al.
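For concreteness, the sketch below illustrates the suggested dynamics-plus-reward-model planning loop using the cross-entropy method (CEM) over a discrete action space. The `dynamics_model(state, action)` and `reward_model(state)` callables and all hyperparameters are hypothetical placeholders, not I-PHYRE's actual interfaces.

```python
import numpy as np

def cem_plan(state, dynamics_model, reward_model, horizon=5, n_actions=8,
             pop_size=64, n_elite=8, n_iters=5):
    # Categorical distribution over actions at each step of the horizon,
    # initialized to uniform.
    probs = np.full((horizon, n_actions), 1.0 / n_actions)
    for _ in range(n_iters):
        # Sample candidate action sequences from the current distribution.
        seqs = np.stack([
            [np.random.choice(n_actions, p=probs[t]) for t in range(horizon)]
            for _ in range(pop_size)
        ])
        # Score each sequence by rolling it out through the learned models.
        returns = np.zeros(pop_size)
        for i, seq in enumerate(seqs):
            s = state
            for a in seq:
                s = dynamics_model(s, a)
                returns[i] += reward_model(s)
        # Refit the per-step distributions to the elite (highest-return) sequences.
        elite = seqs[np.argsort(returns)[-n_elite:]]
        for t in range(horizon):
            counts = np.bincount(elite[:, t], minlength=n_actions)
            probs[t] = (counts + 1e-3) / (counts.sum() + 1e-3 * n_actions)
    # Execute only the first planned action, then re-plan at the next step.
    return int(probs[0].argmax())
```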
flag_for_ethics_review: ['No ethics review needed.']
rating: 8: accept, good paper
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | 64QGA8W94P | official_review | 1,698,617,832,315 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_p9BE"
] | summary: The paper presents a series of intricate tasks centered on physical reasoning and interaction. Additionally, it suggests three distinct approaches for task resolution using supervised or reinforcement learning methods. Finally, an extensive user study is conducted to gauge human proficiency and compare it with the various learning techniques.
soundness: 3 good
presentation: 3 good
contribution: 2 fair
strengths: The study introduces a valuable benchmark for assessing model predictions concerning physical outcomes. The benchmark's interactive nature facilitates real-time planning, crucial for real-time physics interactions, particularly when timing plays a key role in the dynamics. Furthermore, it supports multi-step interventions, promoting long-term predictions over brief, single-step action forecasts.
weaknesses: 1. Please add a table grid to Figure 2 for clearer comparisons.
2. The author notes the absence of 3D interactive environments as a limitation. However, this is a significant point to address since many high-performing models should ideally transfer seamlessly to robotics. A 3D interactive environment would greatly facilitate this transition. Notably, papers like ComPhy feature 3D environments, as referenced in Table 1 by the author.
3. I appreciate the author's use of this paper as a benchmark for various RL approaches, highlighting the need for more research in this domain to address physical reasoning tasks. What potential solutions or recommendations does the author suggest for this benchmark?
questions: Mentioned in weakness
flag_for_ethics_review: ['No ethics review needed.']
details_of_ethics_concerns: No
rating: 6: marginally above the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | vonPzoQUCQ | official_comment | 1,700,237,195,949 | Vc3g1LmXdg | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Authors"
] | title: Response to Reviewer PPjJ
comment: We thank Reviewer PPjJ for constructive feedback. Below, we answer questions raised by PPjJ and point out several misunderstandings.
> Generalize to unseen games from the basic split.
We further created another 10 games similar to the basic games by varying the object positions and angles. The agents' generalization performance is shown in the following table. Overall, performance on these new games is slightly lower than on the original basic games but better than on the noisy games.
| **Agent** | Random | A2C-I | PPO-I | SAC-I | A2C-C | PPO-C | SAC-C | A2C-O | PPO-O | SAC-O |
|-|-|-|-|-|-|-|-|-|-|-|
| **Reward** | 360.17 | 862.25 | 763.13 | 462.48 | 662.72 | 660.51 | 363.60 | 561.75 | 662.12 | 461.39 |
> The training dataset of 20 games for basic split seems too small a dataset to lead to any meaningful generalization to complex splits.
We choose not to scale up I-PHYRE for the following reasons.
1. **Prior evidence does not support large-scale learning approach**: As experimental results from the original PHYRE show, existing methods that excel in within-template tasks still fall short in cross-template settings, despite the fact that the agent learns from a vast amount of variations/data.
2. **Minimal yet complete system for probing the boundary of intuitive physics models**: Indeed, one can scale up by introducing stochasticity into the game dynamics. However, naive scaling risks creating unsolvable games, given that the games demand precise timing. Critically, one of the central goals of this paper is to probe the boundary of the intuitive physics models. We argue the presented simulator is already a minimal yet complete system to achieve this goal.
3. **Learning generalizable and compositional physics**: Given the generalization results presented in works like PHYRE, we set our primary goal in I-PHYRE to be agents that can generalize to unseen scenarios from **learning physical primitives and composing them** with well-designed modeling rather than merely data-driven learning. This paradigm has been justified in other work, such as the [schema network (ICML 17')](https://arxiv.org/pdf/1706.04317.pdf), where the model learns from only a single setup and becomes generalizable to a wide range of variations.
> The success rate of these baselines on full task.
Thank you for pointing it out. We list the success rates of the methods below and will update this table in revision.
| **Agent** | Human | Random | DDPG-I | DQN-O | A2C-I | A2C-O | A2C-C |
|-|-|-|-|-|-|-|-|
| **Success Rate** | 87.55% | 30.00% | 25.00% | 45.00% | 50.00% | 42.50% | 55.00% |
| **Agent** | PPO-I | PPO-O | PPO-C | SAC-I | SAC-O | SAC-C |
|-|-|-|-|--|--|-|
| **Success Rate** | 57.50% | 47.50% | 55.00% | 37.50% | 40.00% | 37.50% |
> Whether I-PHYRE expects emergence of compositional generalization by training on just the basic split games? If yes, have authors tried using recurrent policies?
We do believe that compositional and systematic generalization can emerge by learning from basic games, as evidenced and justified by [schema network (ICML 17')](https://arxiv.org/pdf/1706.04317.pdf), [MLC (Nature 23')](https://www.nature.com/articles/s41586-023-06668-3), and [RCN (Science 17')](https://www.science.org/doi/10.1126/science.aag2612). Note that we do not commit to either MLPs or recurrent networks but rather believe in careful model design.
To directly answer the reviewer's question, we indeed have tried recurrent networks to build a model-based RL agent. However, we observe no significant performance difference; see Appendix E. We conclude that the lack of physics modeling hinders learning helpful dynamics. We discuss potential ways of physics modeling in Appendix H. We will further discuss Lake and Baroni's work in revision.
> How is the combined strategy baseline implemented and what is the action space of this baseline?
The combined baseline predicts execution times for all actions, waits until the earliest one and executes that action, and then updates the execution times. The action space per time step is the same as in the other strategies, i.e., all the blocks.
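For illustration only, the loop below sketches this strategy; the `env` and `policy` interfaces are simplified placeholders rather than our actual implementation.

```python
def run_combined(env, policy):
    # Minimal sketch of the combined strategy (hypothetical interfaces).
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # Predict an execution time for every block from the current state.
        exec_times = policy(obs)
        # Wait until the earliest planned time and eliminate that block;
        # the remaining plan is re-predicted on the next loop iteration.
        block = min(range(len(exec_times)), key=exec_times.__getitem__)
        obs, reward, done = env.eliminate(block, at_time=exec_times[block])
        total_reward += reward
    return total_reward
```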
> Why are the combined strategy baseline trained only for < 100k step in Fig. 3 experiments?
The agents with < 100k steps in Fig. 3 are the SAC variants with the in-advance, combined, and on-the-fly strategies. This is because different RL algorithms are configured with different numbers of **steps per iteration** in Ray RLlib: SAC is set with 1 by default and A2C with 10 by default. However, the **number of iterations** remains the same across different algorithms.
> Have authors tried using a on-the-fly version of GPT-4 baseline presented in the appendix?
In our preliminary study, we tried the basic games using GPT-4 with the planning-on-the-fly strategy. GPT-4 solves none of them and almost always returns no action. Oftentimes, it regards a new frame as unchanged from the previous one, since object locations change only slightly between frames. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | FIBXDRgYVK | official_comment | 1,700,358,311,170 | WJMxD7W7FR | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Authors"
] | title: Response to Reviewer BYPY
comment: We are glad that you consider our paper solid, with both good intuitive and empirical results, and that you acknowledge the contribution and significance of I-PHYRE. We will discuss and compare our work with [1] and [2] in the revised version.
For your concerns:
> *Then making a claim that this difficulty is due to the "inherent complexities of these tasks" is, while not unbelievable, strong.*
We acknowledge the challenge in analyzing the innate complexities of games. Our research shows these complexities by highlighting the **sampling difficulty in identifying successful solutions** in Appendix B, providing an approximate estimation of game difficulties. We will clarify this point in the revised version.
> *Failure reasons*
We appreciate your comments! In the revised version, we will elaborate on the experimental results presented in Appendix G and strengthen our claims with empirical evidence and hypotheses.
> *On Figure 1*
Thank you for your suggestions. In response:
1. We used larger arrows to illustrate relationships and will enlarge them further if space permits.
2. We will add additional annotations and coloring.
3. We changed the caption to "Wrong order leads to missing elimination timing for two balls" and referred readers to our supplementary demo video for a detailed explanation.
> *"Combined" planning strategy needs better explanation.*
The plan updates after each action. For example, if the initial plan involves actions at times t1 and t2 (t1 < t2), the agent waits until t1, executes the action, and updates the plan for the following action.
> *On Figure 2*
We will revise Figure 2 to:
1. Separate the bars according to strategies.
2. Depict human results as horizontal lines for direct comparison.
> *On Figure 3*
We will organize Figure 3 into groups in the revised version.
> *Contribution statements*
We will incorporate your suggestions in the revised manuscript.
> *Different naming conventions for agents*
We will ensure consistency in agent naming throughout the paper.
> *Failure on I-PHYRE*
We view multi-step reasoning and action timing as **scientific** problems, while delayed reward and sparse action are **computational** challenges. The multi-step nature causes delayed rewards, and action timing implies sparsity. We will distinguish hypotheses from claims in the revised version.
> *Nature of the "combined" strategy*
The combined strategy, inspired by human thinking, maintains a global vector of pre-planned action timing, updating it intermittently. This approach balances between constant updates and pre-planning, re-planning only at key frames for computational efficiency and effectiveness. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | kX8Ermvnem | official_comment | 1,700,358,345,459 | yxihvLc5Mt | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Authors"
] | title: Response to Reviewer yzoQ
comment: Thank you for appreciating our clear motivation, novel design, and thorough experiments and analysis.
> *The visualization of Figure 1. is slightly hard to understand at first glance.*
We acknowledge your feedback. In response, we will divide Figure 1 into two subfigures to more effectively illustrate the challenges of I-PHYRE and the creation of game splits.
> *Although the experiment was thorough, it can be that 40 games is a small number of games to anticipate strong generalization.*
We appreciate your concern. We limited the scale of I-PHYRE for several reasons:
1. **Prior evidence against large-scale learning**: Despite a large dataset, methods excelling in within-template tasks struggle in cross-template settings (as seen in PHYRE).
2. **Minimal yet complete system**: Scaling up can introduce unsolvable games due to the need for precise timing. Our goal is to probe the boundary of intuitive physics models, and the current simulator effectively serves this purpose.
3. **Learning generalizable and compositional physics**: We focus on generalization from learning physical primitives and composing them, as supported by research like the [schema network (ICML 17')](https://arxiv.org/pdf/1706.04317.pdf), which generalizes from a single setup to a range of variations.
> *Related work*
We will extend our discussion on related work in the revised manuscript.
> *More dominant model-based reinforcement learning.*
We will explore recent methods in model-based reinforcement learning and integrate them into our discussion, following the World Model paradigm. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | zqZqWY5snT | official_comment | 1,700,358,398,315 | 64QGA8W94P | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Authors"
] | title: Response to Reviewer p9BE
comment: We are grateful for your recognition of I-PHYRE's importance and address your concerns below:
> *Add a table*
We will include a table in the revised version, space permitting.
> *The author notes the absence of 3D interactive environments as a limitation.*
While we acknowledge the potential benefits of a 3D interactive benchmark, our focus on 2D is a deliberate step from PHYRE's design, emphasizing interactivity. We hypothesize that the main challenge in 3D relates to perception, and with advancements in the vision field, a robust physics reasoning method will be increasingly valuable.
> *What potential solutions or recommendations does the author suggest for this benchmark?*
As outlined in Section 5.1, we propose focusing on physics modeling, multi-step interventions, and action timing to build more powerful physical reasoning agents. The [schema network (ICLR 17')](https://arxiv.org/pdf/1706.04317.pdf) is a promising starting point, advocating for transparent environment modeling to improve optimization and planning. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | sQPLfDCGBh | official_comment | 1,700,681,918,361 | zqZqWY5snT | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_p9BE"
] | title: Response to rebuttal
comment: I would like to thank the authors for their response; after consideration, I will keep my rating. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | w0s8hNRu7D | official_comment | 1,700,682,802,230 | Vc3g1LmXdg | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_PPjJ"
comment: Thank you for addressing my concerns. I have read the authors' responses to all reviews. I would recommend that the authors add the success rates to the main paper, along with results testing generalization to unseen variants of the basic-split games. In addition, it would be nice if the authors added more details describing the combined baseline in the main paper; the current description is unclear. It would also be good to add details about hyperparameters, such as the maximum steps chosen for RL training, in the appendix. As most of my concerns are addressed, I will update my rating accordingly. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | Vc3g1LmXdg | official_review | 1,698,176,358,341 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_PPjJ"
] | summary: The paper studies the problem of interactive physical reasoning, with a focus on intuitive physical reasoning (approximate capability to predict physical outcomes), multi-step planning (execution of multi-step actions to complete the task), and in-situ interventions (the necessity for timely object manipulation to succeed). To study this problem, the authors propose a block elimination task where the goal is to ensure all red balls fall into the hole by removing a minimum number of blocks. The benchmark consists of 40 unique games segmented into 4 splits: a basic split for training and noisy, compositional, and multi-ball splits for testing the generalization of agents' physical understanding. The authors also propose 3 different planning strategies (planning in advance, planning on the fly, and a combined strategy) to solve I-PHYRE using both supervised and reinforcement learning agents. In addition, the authors benchmark human performance on the task and compare it with current learning algorithms.
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: 1. The proposed problem statement is interesting and relevant for advancing the physical understanding of learned agents. The proposed block elimination task captures multi-step planning and the necessity for timely actions, which are novel aspects of the benchmark
2. The experimental setup demonstrates the performance of 3 strong baselines using different learning paradigms, i.e., reinforcement and supervised learning. It also demonstrates effectively that the proposed baselines significantly underperform and that there is substantial room for improvement
3. The paper benchmarks and establishes a human baseline for interactive physical reasoning
4. The paper is well written and easy to follow
weaknesses: 1. It is unclear if the learned agents can generalize to unseen games from the basic split. The benchmark tests generalization to noise, compositionality, and the multi-ball setup, but doesn't present results on unseen games with properties similar to the basic split. It would be good if the authors could add an unseen basic split and present the performance of trained agents on it. The noisy split seems like a substitute for the basic split, but since it uses the same games from the basic split plus some noise, I am worried it is not a good representative of generalization to unseen games with similar properties
2. The training dataset of 20 games for the basic split seems too small to lead to any meaningful generalization to the complex splits on which generalization is being tested. Can the authors describe why they chose a small dataset for training? Is it possible to procedurally scale the training data? If not, why?
3. Results presented in Figs. 2 and 3 only compare different methods by average reward on the evaluation splits; they do not highlight the success rate of these baselines on the full task. It would be nice to have a comparison of success achieved on the full task to get a better sense of the results.
4. Poor performance on the compositional and multi-ball splits for the RL-trained baselines looks expected with MLP policies and no large-scale RL training. Can the authors elaborate on whether the I-PHYRE benchmark expects the emergence of compositional generalization to such complex splits by training on just the basic-split games? If yes, have the authors tried using recurrent policies? Although limited, recurrent networks have demonstrated some form of compositional generalization [1] in limited machine translation tasks that require "mix-and-match" strategies to solve the task (which is true for the I-PHYRE compositional split)
5. The proposed task operates in a simple 2D environment, which is a less realistic setup
[1] Lake, B. M., & Baroni, M. (2018) Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks
questions: 1. How is the combined strategy baseline implemented, and what is the action space of this baseline? From the text in the main paper, it seems like this baseline predicts continuous values at the start of the episode and keeps updating the full vector after each timestep. Is this correct?
2. Why are the combined strategy baselines trained for only < 100k steps in the Fig. 3 experiments?
3. Have the authors tried using an on-the-fly version of the GPT-4 baseline presented in the appendix? How well does it perform? By an on-the-fly version, I mean querying GPT-4 after each timestep to output the next action instead of using just the initial scene
The primary concern I have is around the small size of the training dataset and the challenges around scaling it. I'd appreciate it if the authors could discuss the issues and concerns around that question. I am open to updating my rating if the authors answer my questions.
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | fYjjWu8XV4 | official_comment | 1,700,698,605,295 | yxihvLc5Mt | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Reviewer_yzoQ"
] | title: Response to rebuttal
comment: Thanks for your comments. I have read the response, and most of my concerns have been addressed. I will keep my current rating of 8. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | m9uLkwJ7ob | meta_review | 1,701,584,497,580 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission9/Area_Chair_BzcP"
] | metareview: The paper received positive ratings from the reviewers (three “marginally accept” and one “accept”). The reviewers had various concerns, for example, (1) lack of evidence for some of the claims, (2) planning strategies not being the contributions of this paper, (3) a small number of games to claim generalization, (4) being limited to 2D environments. The rebuttal addressed some of the concerns by the reviewers. Despite these weaknesses, the AC and the reviewers believe the paper provides an interesting benchmark for intuitive physics, and the experiments are thorough. Therefore, the AC follows the recommendation of the reviewers and recommends acceptance.
justification_for_why_not_higher_score: The paper exhibits some weaknesses. For instance, it does not consider 3D environments, which are more realistic compared to the proposed benchmark. The authors' justification for using 2D environments is that they have followed prior work from four years ago, but this argument is not compelling.
justification_for_why_not_lower_score: The paper proposes an interesting benchmark that helps advance reasoning about intuitive physics. |
1bbPQShCT2 | I-PHYRE: Interactive Physical Reasoning | [
"Shiqian Li",
"Kewen Wu",
"Chi Zhang",
"Yixin Zhu"
] | Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available. | /pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf | hwrjXYqUGK | decision | 1,705,405,927,153 | 1bbPQShCT2 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (poster) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | mQVSo5ovS7 | official_review | 1,699,203,471,923 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_VA65"
] | summary: The paper conducted a longitudinal analysis of data contamination in large language models (LLMs), a problem where models are evaluated using data that they may have been trained on, thus overstating their capabilities. The authors leveraged natural experiments provided by the training cutoff dates of models like GPT-3.5 and GPT-4 to study contamination. They analyzed Codeforces and Project Euler, websites that release code problems over time, and found evidence of contamination based on the pass rate of LLMs for problems released before their training cutoff dates. The study demonstrates statistically significant associations between a problem's presence on GitHub and LLM performance for pre-cutoff problems.
soundness: 1 poor
presentation: 3 good
contribution: 2 fair
strengths: 1: The analysis from a longitudinal perspective is novel.
2: The comprehensive experiments, large-scale dataset and code base provided by this work will definitely benefit the community of contamination analysis.
3: This paper is well organized and easy to understand.
weaknesses: 1: The results are interesting but not that surprising. Many blogs and discussions in the community about data contamination have reported similar results.
2: There is a lack of in-depth analysis of how implicit contamination is possible. It would be much better if some real examples could be extracted to show how this could happen.
Overall, I do appreciate the effort to investigate the data contamination problem from a longitudinal angle and to open-source the data/code. The experiments also show intriguing results. But I believe the contribution of this paper is not enough to be accepted by ICLR, given its limited scope and technical novelty. It is limited to code datasets, and the only novelty is how to split the "train" and "test" sets.
questions: N/A
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | M4jPZxVj0f | official_review | 1,698,873,037,377 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_XbnJ"
] | summary: This paper presents a detailed investigation into data contamination in large language models (LLMs), using GPT model training cutoffs to analyze benchmarks released over time. It examines two datasets, Codeforces and Project Euler, revealing clear patterns that suggest contamination based on the LLMs' pass rates correlated with benchmarks' GitHub popularity and release dates. The authors provide a comprehensive dataset, findings, and a framework for future analysis, promoting better practices for benchmark releases in the era of web-scale LLM training.
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: The idea of investigating data contamination in LLMs via training cutoffs makes sense and is interesting, since it guarantees that post-cutoff test data are not available in the LLMs' training set. And the findings are surprising, revealing that people should deal with the ability of LLMs more carefully. This study shows that LLMs are likely to have generalization problems, just as traditional ML models and deep neural networks do. I think this should raise the attention of ML researchers.
weaknesses: I am not very familiar with LLMs, and I only have one question about the design of cutoffs. What if a code problem released later is very similar to a problem that already existed? How to measure the data contamination problem is also an important question.
questions: Please refer to Weaknesses.
flag_for_ethics_review: ['No ethics review needed.']
details_of_ethics_concerns: N/A
rating: 6: marginally above the acceptance threshold
confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | 2AwqqIvl4C | official_comment | 1,700,173,950,717 | mQVSo5ovS7 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer VA65 (1/2)
comment: We thank you for taking time to review the manuscript and provide valuable feedback. We are glad to see that you find our work to be beneficial to the community of contamination analysis, our results intriguing, and our paper well-written. We reply to each of your questions below.
**1. results interesting but not surprising.**
Thank you, we agree that the results are interesting. While we understand that the results may not be surprising to you, they are to other reviewers; XbnJ described “the findings are surprising, revealing that people should deal with the ability of LLMs more carefully”. More importantly, our work is a non-trivial contribution to the community that presents the first scientifically rigorous confirmation of the phenomena that social media posts have speculated about via small-scale/ad hoc analyses. While many informal statements have been made about GPT-4 memorization, we show the first statistically significant differences in performance on problems released before and after the cutoff date. Furthermore, our work lays the groundwork for rigorous analysis of contamination and best practices in the age of LLMs trained on webscale data. Our methodology is becoming increasingly relevant as the community acknowledges the limitations of static benchmarks for LLM evaluation, and shifts toward dynamic/longitudinal benchmarks such as the ones we construct, open-source, and analyze here.
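As an illustrative sketch only (not our exact analysis code), the core pre- vs. post-cutoff comparison can be set up as follows, assuming a hypothetical pandas DataFrame `df` with a datetime `release_date` column and a numeric `pass_rate` column, and an assumed September 2021 training cutoff:

```python
import pandas as pd
from scipy.stats import mannwhitneyu

CUTOFF = pd.Timestamp("2021-09-01")  # assumed GPT training cutoff

def cutoff_comparison(df):
    # Split pass rates into problems released before vs. after the cutoff.
    pre = df.loc[df["release_date"] < CUTOFF, "pass_rate"]
    post = df.loc[df["release_date"] >= CUTOFF, "pass_rate"]
    # One-sided test of whether pass rates on pre-cutoff problems are higher.
    stat, p_value = mannwhitneyu(pre, post, alternative="greater")
    return pre.mean(), post.mean(), p_value
```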
**2. On how implicit contamination is possible.**
To begin, we’d like to clarify our intention for including the words “implicit” and “explicit” in our abstract, specifically in reference to how a given problem might come to be included in a model’s training corpus. Our intention here was to acknowledge that the process of collecting webscale data can lead to the inclusion (in the training set) of examples that were never *intentionally* selected for training–for example, we consider the unintentional inclusion of BIG-bench examples in the GPT-4 training corpus ([OpenAI, 2023](https://arxiv.org/abs/2303.08774)). Through re-posting, removal of watermark information, indirect scraping, or other means, information intended to be omitted from LLM scraping is rarely truly safe. For clarity, we have modified our abstract text to refer to examples that are “intentionally” (explicitly) or “unintentionally” (implicitly) included in the training data.
On a separate note, we have newly added Appendix B.8 which includes a number of samples of generated outputs from GPT-4 and GPT-3.5-Turbo. These can be used to qualitatively examine their outputs across a range of pass rates on both datasets.
**3. On scope and novelty.**
Thank you for your comments. We focus on code generation for a few key reasons: (i) it is a very popular use-case; (ii) most recent LLMs have code as a major part of their training data; (iii) unlike general online natural language text, GitHub has a uniform interface and is easily scrapable and cleanable, making it simultaneously easy for GitHub solutions to be added to train datasets as well as for us to assess GitHub presence; and (iv) code generation and problem solving datasets have objective correctness metrics (test cases) producing objective evaluations of open-ended generations that don’t require another model or human in the loop.
We also note that dataset desiderata for the methodological approach we propose include: (i) problems must have been released over a sufficiently long time horizon, such that it is possible to partition examples into pre- and post-GPT-training-cutoff subsets based on problem release date. This restriction precludes some other popular benchmarks which have been released in a single time-step (e.g. [HumanEval](https://github.com/openai/human-eval) or [MBPP](https://arxiv.org/pdf/2108.07732.pdf)). (ii) problems should consist of high-quality questions requiring non-trivial solution generation that admit objective evaluation functions/correctness measures.
We have thus focused our efforts on the non-trivial scraping and processing required to analyze Project Euler and Codeforces datasets in the way that we have. We also emphasize that our work introduces tools and paves the way for further rigorous study related to data contamination, which gives our work the potential to have high impact in the community.
We’d also like to highlight that our paper should also be viewed as a methodological one. Specifically, we believe our work takes a novel view on data contamination estimation by employing a natural experiment – a concept borrowed from the economic literature. We use this concept effectively here and argue for its further use, particularly in experimenting, evaluating, and discovering the intricacies of LLMs. We argue that this evaluation methodology should be used in particular on closed-source models which refuse to reveal critical and important details about their development/training. We have updated our manuscript to highlight this point further.
(1/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | FwHY9yFTks | official_comment | 1,700,174,025,495 | mQVSo5ovS7 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer VA65 (2/2)
comment: Thank you very much, once again, for your helpful feedback. If you find our responses satisfying, we respectfully ask that you consider increasing your score. We are still conducting new analyses and are excited to update you with their results. We would be very happy to answer any follow-up or additional questions you have.
(2/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | WFd2jpAc4g | official_comment | 1,700,174,150,480 | M4jPZxVj0f | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer XbnJ (1/2)
comment: We thank you for your thoughtful review. We appreciate that you find our longitudinal study interesting and that it should be raised to the attention of ML researchers.
**1. Design of the cutoff: similar code problems.**
Thank you for the great question; we understand that your question concerns the methodology of our analysis.
Recall that we propose a method to measure contamination through a natural experiment where the training cutoff bifurcates an evaluation set. We appreciate that you agree with this methodological approach and our resulting analysis, and we agree that checking for duplicated questions before and after the cutoff is an important step.
In summary, based on your suggestion, we did find 56 exact duplicates (0.7% of the total set of problems), none of which straddle the cutoff. Community experts identified 8 pairs of similar questions among the Codeforces problems; only 5 of these pairs straddle the cutoff. The overall effect on our results should be minimal considering the large drop-off in performance after the cutoff. In the coming days, we will rerun our analysis and update you and the paper before the end of the rebuttal period. We explain more below.
For *textual duplicates*, we have confirmed that there are none in Project Euler. In Codeforces, we have identified a set of 40 unique problems (out of a total of more than 8,300) for which the dataset contains multiple (i.e., 2-3) copies. This can occur if a competition problem is later re-posted as part of a practice set. All duplicated tuples belong to the subset of problems released before the GPT training cutoff. As the focus of our analysis is on comparing performance on examples released before versus after the training cutoff, we would be concerned if a majority of these examples were “cutoff-crossing”, but we do not find that to be the case. Before the discussion period concludes, we will re-run our analyses omitting the duplicates, and will provide you with updated results. We expect that the impact of removing 56 observations (corresponding to the duplicates of the aforementioned 40 problems) from the pre-cutoff subset, which contains >6,000 observations, will be minimal, and do not expect the qualitative nature of our conclusions to change.
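For concreteness, a minimal sketch of the kind of exact-duplicate check described above (the field names and toy records below are illustrative placeholders, not our actual data schema or pipeline):

```python
from collections import defaultdict
from datetime import date

CUTOFF = date(2021, 9, 1)  # GPT training cutoff (Sept 2021)

def find_duplicate_groups(problems):
    """Group problems whose (whitespace-normalized) statement text is identical."""
    groups = defaultdict(list)
    for p in problems:
        key = " ".join(p["statement"].split())
        groups[key].append(p)
    return [g for g in groups.values() if len(g) > 1]

def straddles_cutoff(group, cutoff=CUTOFF):
    """True if a duplicate group contains copies on both sides of the cutoff."""
    sides = {p["release_date"] <= cutoff for p in group}
    return len(sides) == 2

# Toy records: two identical statements, both released before the cutoff.
problems = [
    {"id": "123_A", "statement": "Given n, print 2*n.", "release_date": date(2019, 5, 1)},
    {"id": "456_B", "statement": "Given n, print 2*n.", "release_date": date(2020, 1, 10)},
]
dups = find_duplicate_groups(problems)
print(len(dups), "duplicate group(s);", sum(straddles_cutoff(g) for g in dups), "straddle the cutoff")
```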
As for *semantic duplicates*, it is certainly non-trivial to check for the existence of such problems. Thankfully, the Codeforces community has [discussed](https://codeforces.com/blog/entry/113016) this topic before, highlighting 8 near-duplicate pairs, of which five had one question on either side of the cutoff, while the other three had both questions released before the cutoff.
To examine the five pairs that straddle the cutoff, we compare GPT-4 performance:
Pre-Cutoff Problem | Post-Cutoff Problem | Pre-Cutoff Problem Pass Rate | Post-Cutoff Problem Pass Rate
|-|-|-|-|
765_F | 1793_F | 0.50 | 1.00
652_C | 1771_B | 0.38 | 0.00
342_E | 1790_F | 1.00 | 0.50
923_B | 1795_C | 0.17 | 0.00
1462_C | 1714_C | 0.00 | 0.00
Mean before cutoff: 0.41, Mean after cutoff: 0.30
We also compare GPT-3.5-Turbo performance:
Pre-Cutoff Problem | Post-Cutoff Problem | Pre-Cutoff Problem Pass Rate | Post-Cutoff Problem Pass Rate
|-|-|-|-|
765_F | 1793_F | 0.50 | 1.00
652_C | 1771_B | 0.00 | 0.00
342_E | 1790_F | 0.00 | 0.50
923_B | 1795_C | 0.33 | 0.00
1462_C | 1714_C | 0.14 | 0.00
Mean before cutoff: 0.20, Mean after cutoff: 0.30
A potential risk of including such semantically similar problem instances in our evaluation is that the performance on post-cutoff examples would be upwardly biased, since the model is actually seeing examples which are quite similar to those it has seen in training, despite our best efforts to control for such exposure via our cutoff-based partition of the dataset. The small size of this subset means this effect, were it to exist, would be relatively minimal, and it would work against, rather than in favor of, the conclusions we ultimately draw. As such, we do not feel that the presence of such examples undermines our analysis. We will provide an updated analysis in the coming days to confirm.
Finally, we’d like to acknowledge that we agree with your broader point -- i.e., the question of *how* to measure or detect when data contamination has occurred is important. We contend that our approach, which leverages the cutoff date as a source of naturally arising variation, should be viewed as a compelling complement to existing approaches, which often rely on the ability to manipulate the underlying dataset and/or synthetically introduce contamination and then measure the impact on downstream tasks. We are able to determine that contamination is likely to have occurred by evaluating performance leveraging a freely available feature of the dataset (i.e., each problem’s release date, relative to the cutoff), without requiring any sort of post-facto intervention or manipulation.
(1/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | QcgjgTUZBl | official_comment | 1,700,174,196,156 | M4jPZxVj0f | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer XbnJ (2/2)
comment: **2. Additional experimental updates**
Finally, we would like to highlight a few additional experiments and updates we finished during the rebuttal period:
1. We have rendered the coefficient tables as forest plots (suggested by EfiM) and supplemented the regression tables with them throughout. See Figures 2 and 3 in the main paper, and associated figures in the Appendix. This change makes it easy for the reader to quickly visualize the regression coefficients and see how they become closer to 1 (i.e., the null effect) after the cutoff; a minimal sketch of this kind of plot is included after this list.
2. We have conducted new analyses to assess whether the drop-off in performance that we observe for problem examples released after the GPT training cutoff might be attributable to (potentially latent) covariate shifts.
3. We added a new section B.6 to describe our experiments with open source LLMs; we also have plans to expand to other LLMs during the rebuttal period and are awaiting results.
4. We added examples of generations from the LLMs in Section B.8.
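As referenced in item 1 above, here is a minimal sketch of this kind of forest plot (the coefficient names, estimates, and intervals below are placeholders for illustration, not our actual results):

```python
import matplotlib.pyplot as plt

# Placeholder exponentiated regression coefficients with confidence intervals.
names = ["log(GitHub Presence)", "log(Difficulty)"]
estimates = [1.35, 0.70]
ci_low = [1.20, 0.60]
ci_high = [1.52, 0.82]

y = range(len(names))
xerr = [[e - lo for e, lo in zip(estimates, ci_low)],
        [hi - e for e, hi in zip(estimates, ci_high)]]

plt.errorbar(estimates, y, xerr=xerr, fmt="o", capsize=4)
plt.axvline(1.0, linestyle="--", color="gray")  # 1 = the null effect for exponentiated coefficients
plt.yticks(y, names)
plt.xlabel("Exponentiated coefficient")
plt.tight_layout()
plt.savefig("forest_plot.png")
```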
Thank you once again, for your excellent questions. Please let us know if you have any follow-up questions or further suggestions. We would be happy to continue the discussion, and we will update you with our reanalysis soon.
(2/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | AIrmfWxXs3 | official_comment | 1,700,174,322,388 | gb4bzg4HXx | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer EfiM (1/2)
comment: We thank you very much for your thorough and enthusiastic review. We appreciate that you find our work well motivated, clearly conceptualized, and well-executed. We agree with all of your suggestions and have done our best to address them below and in the updated manuscript. Please let us know if we have addressed your comments.
**W1+2: Fig 1 minor changes.**
Thank you for pointing out these stylistic changes. We have now moved the legend and updated “Github” -> “GitHub”.
**W3: Fig 1 for Project Euler.**
Thank you; this plot can be found in Figure 17, and the associated plot for GPT-3.5 can be found in Figure 19. There you can see that the performance of these systems on Project Euler is much reduced on problems released before the training cutoff, and entirely wiped out for problems released after it: neither GPT-4 nor GPT-3.5 yields any pass rate above 0 for problems released after the training cutoff.
**W4: Pass rate when log(Github Presence)=0.**
This is a great insight! We believe that the main factor at play is (b): GitHub Presence is a strong indicator of availability in the training set, but it still underestimates the true availability in the training set. For example, *all* Codeforces problems show up on the internet in several places (e.g., [here](https://cf.kira924age.com/#/table/), [here in pdf](https://github.com/AliOsm/PDF-CodeForces-Problems)). Therefore, we believe this is a major reason why the pass rate is higher before the cutoff even for problems with log(GitHub Presence)=0. We comment on your other hypotheses as well.
1. In terms of the models themselves, we used the exact same models when evaluating problems released before and after the cutoff date (in fact, the evaluations were performed in one large batch and later separated by date).
2. As we state above, we believe that this is the main factor. We agree that “GitHub presence” may under-estimate the total presence of a Codeforces problem in the GPT training data. For example, *all* Codeforces problems show up on the internet in several places (e.g., [here](https://cf.kira924age.com/#/table/), [here in pdf](https://github.com/AliOsm/PDF-CodeForces-Problems)).
3. It is challenging to fully rule out this possibility, but qualitatively, we cannot spot any change over time in the type of Codeforces problems released. It is possible that a difference in problems does cause a small change in pass rate after the cutoff, but we believe that the majority of the observed difference in pass rate can be attributed to (2).
    1. To investigate, we conduct a set of additional experiments to assess whether the distribution over tags (only available for Codeforces) and/or difficulty level (available for Codeforces and Project Euler) changed in a statistically significant way during the post-cutoff period, relative to the pre-cutoff period. We present this analysis in its entirety, consisting of qualitative plots and $\chi^2$ tests, in Appendix B.7 of our updated submission PDF, and summarize key findings below (an illustrative sketch of this kind of $\chi^2$ test appears after this list):
   - For **Codeforces**, we do not find any statistically significant difference in the distribution of normalized counts over problem tags between the pre- and post-cutoff periods. We also do not find any statistically significant difference in the distribution over discretized difficulty scores between the pre- and post-cutoff periods.
   - For **Project Euler**, problem tags are not publicly available, so we cannot perform tag analysis. We do not find any statistically significant difference in the distribution over discretized difficulty scores between the pre- and post-cutoff periods.
- These findings help to mitigate concerns that the drop-off in performance we observe might be attributable to significant changes in the distribution over tags (for Codeforces) and/or over difficulty levels during the post-cutoff period.
4. We ran the same evaluation script for all problems, and only then separated by date: https://anonymous.4open.science/r/to-the-cutoff-review-253A/eval/chronological_evaluation/chronological_dataset.py (line 162).
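As referenced in item 3 above, here is an illustrative sketch of this kind of $\chi^2$ test (the tag counts below are made-up placeholders, not our actual data; the real analysis appears in Appendix B.7):

```python
# Compare the tag distribution of problems released before vs. after the cutoff
# with a chi-squared test of independence on the 2 x (number of tags) count table.
from scipy.stats import chi2_contingency

pre_counts = {"dp": 120, "greedy": 90, "math": 150, "graphs": 60}   # placeholder counts
post_counts = {"dp": 30, "greedy": 25, "math": 40, "graphs": 15}    # placeholder counts

tags = sorted(set(pre_counts) | set(post_counts))
table = [
    [pre_counts.get(t, 0) for t in tags],
    [post_counts.get(t, 0) for t in tags],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")
# A large p-value indicates no statistically significant shift in the tag
# distribution between the two periods.
```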
(1/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | hd3M3a61gT | official_comment | 1,700,174,376,548 | gb4bzg4HXx | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer EfiM (2/2)
comment: **W5: GPT-generated outputs on problems released after the cutoff.**
To address this question, we would like to: (a) make a minor methodological clarification; and (b) note that we have added a new section to the appendix (B.8) that contains numerous examples of GPT-4 and GPT-3.5-Turbo output (i.e., generated code) for Codeforces problems released before and after the GPT training cutoff. (We restrict our attention to Codeforces because our Project Euler analysis focused on the correctness of generated numerical solutions, rather than evaluating the outputs of generated code).
With respect to (a), we note for clarity that what we are trying to control for in our natural experiment are factors with the potential to influence an LLM’s ability to produce a functionally or numerically correct solution to a problem (i.e., the problem’s difficulty), and/or the likelihood that the LLM would have seen this problem during training (i.e., GitHub presence, since scraped GitHub repositories are part of the GPT training corpus). Thus, GPT need not produce *worse* code when we evaluate its ability to produce functionally correct solutions for problems released after the cutoff. Indeed, given that we perform all of our evaluations at a single point in time, the code-generating ability of each GPT instance is “fixed”--what is (potentially) changing is the artificially inflated performance benefit (or lack thereof) the model may demonstrate when it is evaluated on examples it *has* seen during training versus those it has not.
With respect to (b), i.e., our new appendix section: because the full description and generated output associated with a given problem can both be quite lengthy, we have constructed this new section by partitioning the Codeforces problems along two dimensions: (1) release date pre- vs. post-GPT-cutoff; (2) discretized LLM functional correctness. With respect to (2), for a given LLM and problem, functional correctness is computed as the ratio of test cases that the LLM’s generated code passes (see Section 4 for details). Thus, functional correctness takes values in [0, 1.0]. We discretize via the following mapping:
$\lambda(x) = \begin{cases} 0 & \text{if } x \leq Q_1 \\ 1 & \text{if } Q_1 < x \leq Q_2 \\ 2 & \text{if } Q_2 < x \leq Q_3 \\ 3 & \text{if } x > Q_3 \end{cases}$
where $x$ represents a given problem’s raw functional correctness score, and $Q_1$, $Q_2$, and $Q_3$ correspond to the first, second, and third quartiles of these scores, respectively.
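For illustration, here is a minimal sketch of this quartile-based bucketing (the variable names and example scores are purely illustrative; this is not our actual analysis code):

```python
import numpy as np

def discretize(scores):
    """Map raw functional-correctness scores in [0, 1] to buckets {0, 1, 2, 3}
    using the empirical quartiles Q1, Q2, Q3 of the scores."""
    q1, q2, q3 = np.quantile(scores, [0.25, 0.50, 0.75])
    def bucket(x):
        if x <= q1:
            return 0
        if x <= q2:
            return 1
        if x <= q3:
            return 2
        return 3
    return [bucket(x) for x in scores]

scores = [0.0, 0.1, 0.25, 0.4, 0.5, 0.75, 0.9, 1.0]  # example scores
print(discretize(scores))  # [0, 0, 1, 1, 2, 2, 3, 3]
```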
We thus consider 8 subgroups per model (i.e., {pre, post} x {0,1,2,3}), and draw two examples uniformly at random from each subgroup. We present each generation along with corresponding problem title, ID, difficulty score, url, and the LLM’s functional correctness score. Our intent here is to avoid cherry-picking of results, while facilitating the visualization of a representative set of generations. We also note that the generated code produced by each model for *every* Codeforces problem is available in the results file that we provide as part of our supplementary material. We view additional/comparative static and behavioral analyses of these outputs as a promising direction for future work.
**Table 1.**
Thanks once again for this suggestion! We have now added these figures. We replaced Table 1 and Table 2 with their corresponding forest plots. Additionally, we have added forest plots to the appendix for every regression coefficient table. These figures provide new visual insight, allowing the reader to quickly grasp the relative magnitudes of the regression coefficients. For example, we can easily see that the coefficients for both Difficulty and GitHub presence get closer to 1 (the null effect) after the cutoff.
Thank you once again, for your excellent points. Please let us know if you have any follow-up questions or further suggestions. We are still conducting new analyses and are excited to update you with their results. We look forward to continuing the discussion.
(2/2) |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | ZFi1oy8DGf | official_comment | 1,700,174,482,528 | XRVbsbQxXL | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Reply to Reviewer yfjE
comment: We thank you very much for your thorough and extensive review. We appreciate that you find our work tackles an important issue, while controlling for confounding variables. We reply to your questions below.
**1. Additional LLMs.**
Thank you for the suggestion! We originally focused on GPT-3.5 and GPT-4 for a few critical reasons: (i) relevance to the large community, in and out of research, that relies on these models, and (ii) these models’ high performance on code generation tasks. With respect to this second point, we clarify that when a model’s pass rate on these problems is prohibitively low, we are effectively unable to observe non-trivial trends or extract longitudinal insights, and we have found this to be the case for some additional models we have tested. For example, GPT-3.5 achieved a 27% pass rate before the cutoff and 13% after, while GPT-4 achieved a 37% pass rate before and 16% after. In contrast, the pass rates we obtained for Text-Davinci-002 with a comparable prompting strategy are less than 1% both before *and* after the cutoff, making it impossible to conduct a substantive analysis featuring functional correctness.
We are also actively collecting data from some other models, including a modern open-source LLM, and plan to provide these results as soon as they are available (we aim to do so before the end of this discussion window). This discussion is also repeated in the new Appendix B.6.
**2. Longitudinal datasets.**
We agree that our analysis requires longitudinal datasets; however, we anticipate that the popularity of such benchmarks will continue to rise, as releasing datasets over time is potentially the only foolproof method for avoiding data contamination. For example, creators of recent benchmarks such as [BIG-Bench](https://arxiv.org/abs/2206.04615) are consciously updating their benchmarks over time. Even other non-LLM benchmarks such as [OpenML Benchmarking Suites](https://arxiv.org/abs/1708.03731) are being updated over time, since there is a related push to ensure that the community [does not overfit](https://arxiv.org/abs/2112.01716) to any single set of benchmarks. We are optimistic that, as staggered release of benchmark datasets becomes increasingly common and/or an accepted best practice, the framework we present here can be more widely applied.
Furthermore, we note that while our analysis can only be conducted on longitudinal datasets, its implications extend to any dataset which can be contained in a webscale training dataset; it leverages longitudinal structure in a test benchmark to reveal the massive contamination effect which can be observed on “seen” examples of any dataset.
**3. Minor comments.**
Thank you for catching these formatting issues; they have now been fixed in the updated pdf.
Thank you once again, for your excellent points. Please let us know if you have any follow-up questions or further suggestions. We are still conducting new analyses and are excited to update you with their results. We would be happy to continue the discussion. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | 7nIlxdc76M | official_comment | 1,700,439,565,435 | AIrmfWxXs3 | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_EfiM"
] | title: Response to Authors (1/2)
comment: Overall, I'm happy with your changes.
> W1+2: Fig 1 minor changes.
Wonderful - thank you!!
> W3: Fig 1 for Project Euler. Thank you; this plot can be found in Figure 17; the associated plot for GPT3.5 can be found in Figure 19
I think the figure numbering might have changed between your answer and the revised manuscript. I see Figure 10 has a caption "Figure 10: Marginal Effects of Pass Rate for GPT-4 on the Project Euler Dataset" but a title of "Functional Correctness Marginal Effects Plots for GPT−4 on Codeforces". Is this figure for Euler or Codeforces?
> W4: Pass rate when log(Github Presence)=0. This is a great insight! We believe that the main factor at play is b) - GitHub Presence is a strong indicator of availability in the training set, but it still underestimates the true availability in the training set.
This seems like a reasonable answer. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | TsTxMJs19C | official_comment | 1,700,440,307,834 | gb4bzg4HXx | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_EfiM"
] | title: Response to Authors (2/2)
comment: > Figure 2
I think this looks very nice. Thank you!! Same with Figure 3.
> Thus, GPT need not produce _worse_ code when we evaluate its ability to produce functionally correct solutions for problems released after the cutoff.
I'm not sure I understand this point. We might be using different terminology. I agree that the evaluations are run at a single point in time, and I believe your evidence (e.g., Figure 1) that increasing GitHub presence is correlated with functional correctness. What I was trying to confirm is that generated code for problems released after the cutoff is "worse" in the sense that we would all agree it passes fewer test cases than generated code for problems released before the cutoff. My intention was to rule out that the evaluation process itself somehow (potentially unintentionally) affected the performance.
For example, suppose that before the cutoff, Codeforces required code to be submitted in format A (e.g., the main function should be called `main()`), but after the cutoff, Codeforces required code to be submitted in format B (e.g., the main function should be called `start()`). If so, I think it would be hardly surprising if a model pretrained on format A would perform less well when evaluated on format B, since the model wouldn't know that format A is deprecated and format B is appropriate?
This is what I meant by confirming that the code post-cutoff is "wrong". Perhaps "less functionally useful" is a better term. I want to know that the generated code for problems post-cutoff is functionally less useful than code for problems pre-cutoff. I'm not interested in things like formatting, variable name choice, etc. that we might consider when discussing code quality.
Could you please clarify what you meant by "GPT need not produce _worse_ code" post-cutoff? How is GPT-4 scoring worse post-cutoff if it isn't producing worse code?
I went to increase your score to 7, but apparently 7 is not an option for ICLR. If we can reach agreement on this topic, and I think we can, then I would be happy to bump you up to an 8. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | QjzMfYlzfc | official_comment | 1,700,613,840,698 | XRVbsbQxXL | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: New Models
comment: Thank you again for your thoughtful review. We are writing with additional experiments, conducted during the rebuttal period, that address your concern *W1: Additional LLMs*. We liked your suggestion of adding more LLMs beyond the GPT-4, GPT-3.5, and Davinci models that were in our original manuscript.
We have now added the analysis of two more LLMs on Codeforces: [Codey/Code Bison](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview) (Google’s code generation foundation model, released around the same time as PaLM 2), and [Code-Llama](https://huggingface.co/codellama).
As for Code Bison, interestingly, we observe the same behavior that we did for GPT-4 and GPT-3.5-Turbo. Specifically, after the training cutoff of Code Bison, we see that the GitHub presence metric is no longer a significant predictor of functional correctness for this model. code-bison@001’s training cutoff, although not publicly known, is probably around February 2023, as this model was released [within weeks](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/model-versioning) of text-bison@001 and chat-bison@001, which have [known training cutoffs of February 2023](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models). Accordingly, we conducted our analysis assuming this February 2023 cutoff. We have updated our manuscript to show this by updating Figure 2 and including all additional figures and tables in the appendix, as with the GPT models.
The results from this additional model show that the training cutoff itself is a moderating factor in code generation performance. This conclusion is further bolstered by the fact that, despite the almost two-year separation between the GPT cutoff and the Code Bison cutoff, we observe the same qualitative finding.
We additionally worked with Code-Llama and tried, to the best of our abilities, to get it to perform on these questions. Unfortunately, even with the largest and most capable variant for our use case, 34b-Instruct, we could not get Code-Llama to output answers that yielded pass rates above 1% on Codeforces. This result mirrors our observation for Text-Davinci-002.
We hope that these additional experiments address your main concern about the limited number of models in our experiments. We now analyze 5 LLMs with various cutoff dates, and we observe signs of contamination for those models that yield pass rates above 1%. We look forward to answering any additional questions you may have before the author response period closes tomorrow (end of day Nov 22nd). Thanks! |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | lmf7PcDWV8 | official_comment | 1,700,639,378,073 | TsTxMJs19C | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Second reply to Reviewer EfiM
comment: Thanks for your reply; we are glad that you are overall happy with the updates!
**Figure numbering.**
Yes, thank you for pointing this out. We have now corrected all plots and captions in the manuscript. The Project Euler plots are Fig 28 (GPT-4) and Fig 30 (GPT-3.5).
**GPT need not produce worse code.**
Thank you for clarifying, and now we understand the confusion. Our statement “GPT need not produce worse code” was meant as a clarification to your original statement “[GPT-4] is indeed outputting worse code [after the cutoff].” From this phrasing, a reader might think that the *model* changes when we evaluated pre-cutoff problems vs. post-cutoff problems, but in fact we kept the model exactly the same for all problems. Now we understand that your actual meaning was that (for example) the evaluation procedure might have changed between pre- and post-cutoff data, or other subtle changes.
With respect to the evaluation process itself, first, we note that from our side, everything remains the same pre- and post-cutoff: we keep the LLM exactly the same, and we use the same slightly modified version of [DeepMind’s evaluation harness](https://github.com/google-deepmind/code_contests) for all problems. From Codeforces’ side, the expected answer format is unchanged from early problems to late problems. Codeforces problems expect submissions of the following format: the submission must comprise a complete code file that, when executed, reads data from stdin and writes results to stdout (note: occasionally, there are problems with an alternative “interactive” evaluation strategy in which the submitted program can provide temporary responses that lead to more input being provided to the program; these problems have always been omitted from our analysis).
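For concreteness, here is a toy example of a submission in this format, i.e., a complete program that reads from stdin and writes to stdout (the task itself is invented purely for illustration):

```python
import sys

def main():
    # Toy task: the first token is n, followed by n integers; print their sum.
    data = sys.stdin.read().split()
    n = int(data[0])
    values = list(map(int, data[1:1 + n]))
    print(sum(values))

if __name__ == "__main__":
    main()
```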
We therefore believe formatting details can be ruled out as an explanation for the shift in performance. There could potentially be more subtle changes, but we are not able to see any from looking at the problem statements and LLM outputs associated with the randomly selected subset of Codeforces problems released before and after the cutoff (for varying levels of GPT functional correctness) in Appendix B.8.1.
Finally, we have just finished adding initial [Code Bison](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview) (Google’s code generation foundation model, released around the same time as PaLM 2) results for Codeforces to our manuscript. Interestingly, we find a statistically significant shift in behavior at Code Bison’s cutoff of February 2023, and we do *not* find a significant shift in behavior if we artificially set the cutoff to Sept 2021 (GPT’s cutoff). This result gives further evidence to rule out “silly possibilities” such as a shift in Codeforces formatting, since we otherwise wouldn’t see such big changes for two different models at their two different respective cutoff dates.
Thank you once again for your comments; we hope that we have clarified our first response, and we are glad that our paper now has more evidence to rule out extraneous factors. Please let us know if you have any further questions or comments. We would be happy to reply any time before the author response period closes tomorrow (end of day Nov 22nd)! |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | D9rEeOBchE | official_comment | 1,700,680,081,989 | QjzMfYlzfc | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_yfjE"
] | comment: Thank you for your comments and updated results. I will update my score accordingly.
Just as a remark:
> We additionally worked with Code-Llama and attempted to the best of our abilities to get it to perform on these questions. Unfortunately, even on the largest and most capable model for our use case, 34b-Instruct, we could not get Code-Llama to output answers which yielded pass rates above 1% on Codeforces. This result follows that of our observation around Text-Davinci-002.
I suspect this may be an artifact of "pass rate" being a discontinuous metric (on a single sample) that cannot adequately measure *partial* progress. It would be interesting to look for a smoother metric. A simple choice might be to look at the perplexity of a/the correct solution, but I am not sure how well that would be calibrated. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | XRVbsbQxXL | official_review | 1,697,974,261,321 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_yfjE"
summary: The paper investigates data contamination of GPT-3.5-Turbo and GPT-4 with problems from Codeforces and Project Euler. It does so by analysing the pass rates in relation to GitHub presence and notes a positive correlation for data before the cutoff, but no significant correlation after this date.
soundness: 3 good
presentation: 3 good
contribution: 2 fair
strengths: 1. Identifying data contamination is an important issue, especially for evaluation datasets that are often used to create rankings.
2. Including problem difficulty as an independent variable is an important step in isolating the confounding effect of item difficulty on pass rates.
3. I appreciate the openness in referencing blog posts and tweets that anecdotally suggested possible contamination prior to this work
weaknesses: 1. The methodology is only applied to GPT-3.5/GPT-4, where training details are unknown. In particular, as noted in footnote 1, OpenAI has admitted to using a small amount of data beyond the cutoff date. While I understand the choice of the GPT family as a commonly used model, it would have been better to verify the approach with fully open models where more training details are available (and more trustworthy).
2. The methodology requires underlying datasets that are longitudinal in nature, i.e. release problems/individual tasks over time; this limits the applicability to sources other than Project Euler / Codeforces.
questions: ### Minor Comments
* Particularly in section 2, some citations are formatted differently, with the author names outside the parentheses; in sequences of different citations, readability could be improved by using the same citation format as in section 1.
flag_for_ethics_review: ['No ethics review needed.']
rating: 8: accept, good paper
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | iZccpn8xN4 | official_comment | 1,700,695,074,049 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Response to all Reviewers
comment: We thank all the reviewers for their insightful feedback and suggestions. We present the first thorough, longitudinal analysis of LLM data contamination, showing that LLMs’ ability to solve coding problems changes dramatically as a function of metrics such as problem release date and GitHub popularity. We appreciate that reviewers find that our work is **well-motivated** and addresses an **important issue** for the community (VA65, XbnJ, EfiM, yfjE). Furthermore, we appreciate that most reviewers found our **experiments well-executed** (VA65, EfiM, yfjE) and our paper **well-written** (VA65, EfiM). We have now updated our paper to include all reviewer suggestions. We highlight a few of the main changes below:
* **We included analysis for two new models: Google’s [Code Bison](https://ai.google/discover/foundation-models/) (released around the same time as PaLM 2) and Code-Llama 34b Instruct.** Code Bison achieves strong performance on Codeforces, and we are able to show the same statistically significant difference in pass rates before and after its assumed training cutoff date of February 2023. In addition to complementing our results, this gives ample evidence that our conclusions are not based on extraneous factors such as a subtle difference in the evaluation protocol before and after the cutoff date. With Code-Llama 34b Instruct, we were not able to obtain answers that yielded pass rates above 1% on Codeforces. We qualitatively observe that many outputs are very far from a correct answer, for example, giving code in C++ or non-English text despite English prompts asking for Python code. We will shortly add the raw results to our anonymous repository.
* We provide evidence that several possible covariate shifts are not responsible for the observed drop-off in performance on problems released after the GPT cutoff.
* We added 32 examples of pairs of Codeforces problems and GPT outputs, sorted into categories based on LLM, release date, and functional correctness. These examples allow readers to see qualitative features of the output code, and they act as a further check to rule out extraneous factors.
Thank you once again, for your reviews, and we would be happy to answer any follow-up questions. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | ufoSHwnvOs | official_comment | 1,700,695,130,974 | QcgjgTUZBl | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Authors"
] | title: Follow up to Reviewer XbnJ
comment: We would like to highlight for you that in our last pdf revision, we updated our functional correctness regression analyses with the removal of the duplicate Codeforces problems. You can see in Figures 10-25 and Tables 1-9 that the removal did indeed have *no impact* on the conclusions of our work. We have also included other updates to our paper which significantly strengthen it, and you can see a summary of them [here](https://openreview.net/forum?id=m2NVG4Htxs&noteId=mQVSo5ovS7). Thank you again for your work to help improve our paper and your thoughtful review! |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | P1VBi6DiNV | official_comment | 1,700,712,186,564 | ufoSHwnvOs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_XbnJ"
] | title: Thank you
comment: Thanks for your rebuttal, and most of my concerns are addressed. Since I'm not an expert in this field, I would like to maintain my score. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | gb4bzg4HXx | official_review | 1,698,684,490,826 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Reviewer_EfiM"
] | summary: Assess whether GPT performance at coding (sometimes called program synthesis) was possibly affected by contamination of pretraining data using a naturally occurring experiment (i.e. comparing scores before and after the pretraining knowledge cutoff dates).
soundness: 3 good
presentation: 4 excellent
contribution: 3 good
strengths: Overall, I really liked this paper. I thought it was well motivated, clearly conceptualized, well executed and somewhat thorough. I have a small number of requested changes, and if the authors and I agree that the changes are sensible and if the authors agree to make the changes, I would be happy to increase my score.
weaknesses: > Figure 1: Marginal Effects of Pass Rate Metric
I think this is an amazing figure. 5 comments, ordered from minor to major:
1. easy: Stacking log(Github Presence) and log(Difficulty) at the bottom makes reading the figure tricky. I might suggest moving log(Difficulty) to the right side.
2. easy: GitHub is stylized "GitHub", not "Github"
3. medium: Where is the equivalent plot for Project Euler? I might have missed this, but I cannot find it in the main text or appendix.
4. hard: The pass rate is significantly lower for easy and medium problems, even for log(Github Presence) = 0. I understand that GitHub Presence is a proxy, but I would think that log(GitHub Presence) = 0 is our best guess for "low or no contamination", but there's still a 10-20% decrease in pass rate. Why? I can think of 2-4 possible answers: (a) GPT-4 genuinely becomes much worse after the knowledge cutoff; (b) GitHub presence is inadequate and/or misleading; (c) the distribution of Codeforces problems changed after GPT-4 was finished pretraining, or (d) something changed in how the pass rate is calculated on generated outputs. More explanations might also be possible. Is there some way for the authors to try to investigate the cause of this shift?
5. hard: I was hoping for either a qualitative or quantitative analysis about what GPT-4 is outputting on Codeforces problems released after the cutoff, but I can't find even a single example of the raw generated outputs. Could the authors please provide some manual examples, even in the appendix, to convincingly demonstrate that GPT-4 is indeed outputting worse code? I want to rule out that silly possibilities (e.g., a shift in formatting) are affecting the results.
> Table 1
I personally find Tables are less effective at communicating than Figures. Since these are regression tables, could you possibly consider switching to a Forest plot of regression coefficients? Some random examples here:
- https://www.researchgate.net/figure/Forest-plot-of-regression-coefficients-95-confidence-interval-for-the-association_fig1_331119872
- https://www.researchgate.net/figure/Coefficient-plots-from-linear-regression-predicting-what-makes-an-interaction-meaningful_fig1_343608677
- http://www.strengejacke.de/sjPlot/reference/plot_models.html.
To make my suggestion as concrete as possible, using terminology from matplotlib & seaborn (assuming you're using Python, but I'm sure R could do this as well), I'm specifically thinking that your X axis should be the estimated parameters and confidence intervals, Y would be the covariates (i.e. Difficulty & GitHub presence), the Hue is either Before Cutoff or After Cutoff, and you have two side-by-side axes, one for GPT4 and the other for GPT3.5.
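A rough sketch of what I have in mind (all coefficient values below are made-up placeholders, not your actual estimates; substitute the fitted coefficients and 95% CIs from your regressions):
```python
import matplotlib.pyplot as plt
import pandas as pd

# Made-up placeholder estimates; swap in the fitted coefficients and 95% CIs.
coefs = pd.DataFrame({
    "model":     ["GPT-4"] * 4 + ["GPT-3.5"] * 4,
    "period":    ["Before Cutoff", "After Cutoff"] * 4,
    "covariate": ["log(GitHub Presence)", "log(GitHub Presence)",
                  "log(Difficulty)", "log(Difficulty)"] * 2,
    "estimate":  [0.42, 0.05, -0.81, -0.65, 0.31, 0.04, -0.70, -0.55],
    "ci_low":    [0.30, -0.08, -0.95, -0.80, 0.20, -0.09, -0.85, -0.70],
    "ci_high":   [0.54, 0.18, -0.67, -0.50, 0.42, 0.17, -0.55, -0.40],
})

covariates = ["log(GitHub Presence)", "log(Difficulty)"]
offset = {"Before Cutoff": -0.1, "After Cutoff": 0.1}
color = {"Before Cutoff": "tab:blue", "After Cutoff": "tab:orange"}

fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharex=True, sharey=True)
for ax, model in zip(axes, ["GPT-4", "GPT-3.5"]):
    for _, row in coefs[coefs["model"] == model].iterrows():
        y = covariates.index(row["covariate"]) + offset[row["period"]]
        ax.errorbar(row["estimate"], y,
                    xerr=[[row["estimate"] - row["ci_low"]],
                          [row["ci_high"] - row["estimate"]]],
                    fmt="o", color=color[row["period"]], capsize=3)
    ax.axvline(0, color="gray", lw=0.8, ls="--")   # zero-effect reference line
    ax.set_yticks(range(len(covariates)))
    ax.set_yticklabels(covariates)
    ax.set_title(model)
    ax.set_xlabel("Coefficient (95% CI)")
axes[1].legend(handles=[plt.Line2D([], [], marker="o", ls="", color=c, label=p)
                        for p, c in color.items()], frameon=False)
plt.tight_layout()
plt.show()
```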
I personally would prefer all regression tables to be visualized as such (Tables 1, 2, and those in the appendix).
questions: Not a question, but I want to note that:
1. I like the use of Pass Rate in lieu of pass@1. I think that's a very sensible choice.
2. I like the citation of Horace He's and Chris Cundy's tweets. Very good scholarship, even if Tweets aren't "published" in a traditional sense.
flag_for_ethics_review: ['No ethics review needed.']
details_of_ethics_concerns: N/A
rating: 8: accept, good paper
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | C1qmslACZl | meta_review | 1,702,877,367,899 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission11/Area_Chair_2bKM"
metareview: Reviewers were largely positive about this work, which demonstrates clear evidence of data contamination in coding and math problems based on the prevalence of related information on GitHub. The authors accomplish this by exploiting discontinuities in training data. This is of broad interest to the community and provides a simple mechanism for evaluating leakage. A few pieces of feedback for the CR.
1. The visualizations of the regression models, such as the marginal effects plots in Fig. 1 or Fig. 12, provide a more interpretable explanation of the regression analysis than reporting the logistic regression coefficients and log-odds ratios. I would recommend using such visualizations or reporting simple differences in means when describing the average effect of the cutoff period.
2. It would be nice to see or have some clear reporting on the average change in performance before/after on the benchmarks, rather than seeing the means broken out by prevalence.
3. log(difficulty) itself is not inherently meaningful to the reader, and I am not sure how these values were chosen. I would recommend reporting on the quartiles or some other percentile-based breakdown. Having visualizations of the distribution of difficulties (e.g., as a histogram, density, CDF) would also be informative for the SM.
4. What do the investigations in the authors' submission tell us about the prevalence of misleading results in recent literature? For example, how well do the problems reported in the work reflect benchmarks used in the literature? The GPT-4 whitepaper appears to indicate that GPT-4 fails to complete 50% of the Codeforces tests, and the authors' analysis includes some control for contamination (with seemingly identical results for both). In this latter case, why is there such a discrepancy here? While the details of the contamination testing protocol are not well defined in the GPT-4 paper, perhaps the authors can share additional details via personal correspondence or some updated version of the arXiv paper.
5. The fact that Code Llama performs so poorly on Codeforces is quite surprising. Is it possible that there is a bug in the evaluation? In any case, some more discussion here would be nice to see.
6. Saying that the authors take an "experimental economics view" is a little bit of an overstatement. Indeed, while this is an example of a discontinuity that would delight many econometricians, the analysis is neither experimental (you could call it a quasi-experiment) nor specific to the type of identification strategy commonly used in the social sciences, statistics, and epidemiology community.
7. This is perhaps beyond the scope of this paper, but seeing similar results on more diverse tasks could help readers understand the generality of the results. This in turn would give the community a stronger sense of how prevalent this issue is and give LLM researchers more ideas for how to reduce contamination.
justification_for_why_not_higher_score: The analysis is not particularly extensive and could consider more diverse benchmark problems. This limits the authors' ability to speak to the prevalence of these types of issues in the literature (or how general the risk is).
justification_for_why_not_lower_score: This work is of broad interest and represents (as far as I am aware) the first rigorous study of this kind. My confidence is a little low here since I am not an expert in dataset leakage. |
m2NVG4Htxs | To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data. | /pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf | zBeveF401R | decision | 1,705,405,927,165 | m2NVG4Htxs | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (poster) |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | md9JBrR4va | official_review | 1,698,785,449,631 | E6EbeJR20o | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Reviewer_axFo"
summary: The paper presents NeuFace, an optimization algorithm for fitting a morphable model to a sequence of multi-view face images. To this end, it refines a pre-trained NN to fit the target images. The optimized loss includes temporal and multi-view regularization terms minimizing the distance of the reconstructed mesh to the temporal moving average, in the case of the temporal term, and to the aligned average, in the multi-view loss case. The model is iteratively refined by alternating the estimation of the reconstructions used in the regularization terms with the optimization of the network parameters that minimize the loss.
The experiments quantitatively compare the reconstructions with two competing algorithms, DECA and EMOCA, on the MEAD, VoxCeleb2, and CelebV-HQ video datasets.
Finally, the algorithm is used to build the "NeuFace dataset" as the result of the reconstruction of the 3D face meshes in these three datasets.
soundness: 2 fair
presentation: 3 good
contribution: 2 fair
strengths: The paper reads well and is properly set in the research context. It addresses a relevant problem, namely, 3D face landmark estimation, with many practical applications and open challenges. The paper contributes with a new dataset and shows that by using it we may improve the accuracy of different face processing algorithms. This will be of interest to the face processing community.
weaknesses: The paper claims to investigate the reconstruction of image-aligned facial details on 3D meshes. However, the approach is based on optimizing the parameters of a 3DMM, with the limitations of a linear model in representing fine facial details.
In the vertex accuracy evaluation experiments described in Sec. 3.4 and shown in Fig. 4, NeuFace optimization is compared with DECA and plain FLAME fitting. The paper does not describe the details of this experiment, specifically what are the train, validation and test data used for evaluating each algorithm. DECA results were produced by the plain pre-trained DECA model. We may assume that NeuFace optimization, as described in Sec. 3.2, was trained with some part of VOCASET and evaluated on a different part of it. It does not seem like a fair comparison, since DECA did not have the chance to see any part of VOCASET.
For the same reason, the quantitative comparisons in Table 2 seem also unfair, since the optimizations in NEUFACE-*-datasets could see part of MEAD, VoxCeleb2 and CelebV-HQ data, whereas those involving DECA and EMOCA datasets did not. In Sec. A.3 and Table S1 we can see that if we give DECA the chance to be refined on these datasets, the NME in MEAD reduces to 2.44, much lower than 4.65 shown in Table 2.
questions: There are important details missing:
- Section 3.1 It would be good if you extended the explanation by adding the dimension of each FLAME parameter. Also, the backbone network, e.g. DECA, not only estimates the 3DMM parameters, but also texture, lighting and a displacement map to model details outside the 3DMM linear model. I understand that the approach discards the texture and lighting part, but what about the displacement map?
- Section 3.2 does not explain how the ground-truth landmarks for equation 2 were obtained. Also, in the Multi-view consistency loss it does not explain where the confidence values for each vertex come from.
- Experiments. The paper must clearly explain what is the train/validation/test data used in each experiment and confirm that the results shown in the accuracy evaluation in Fig.4 and Table 2 are correct and fair.
Often the base pre-trained model fails dramatically. In this situation, averaging the estimated mesh with others in the regularization terms would ruin the optimization, since the average operation is not robust. Would an alternative robust operation, e.g. median, improve the results?
flag_for_ethics_review: ['Yes, Privacy, security and safety', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']
details_of_ethics_concerns: The paper reads "Since our dataset is acquired based on the existing public video datasets (Wang et al., 2020; Chung et al., 2018; Zhu et al., 2022), all the rights, licenses, and permissions follow the original datasets." However, some of these datasets were automatically gathered from the internet. So, it is unclear whether the new dataset is legally compliant.
rating: 5: marginally below the acceptance threshold
confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
code_of_conduct: Yes |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | DC6e0UeMhb | official_review | 1,698,641,748,665 | E6EbeJR20o | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Reviewer_GPao"
summary: In this proposed method, a 3D face database is built based on the neural radiance fields method. In general, the neural face representation tries to find the best 3D mesh representation through the neural network parameters to best fit the multiple views and temporally consistent faces. Multi-view and temporal consistency losses are added on top of the 2D landmark loss in the EM-like optimization process. Based on the proposed method, a significantly larger 3D face database is built using existing public 3D videos. The authors also demonstrated the possible applications of the proposed database to improve 3D face reconstruction and to learn the 3D face prior.
soundness: 3 good
presentation: 3 good
contribution: 3 good
strengths: The proposed 3D face database is beneficial to the research community. The proposed database is significantly larger than the typical 3D face datasets. Experimental results also demonstrate good 3D estimation results.
weaknesses: The theoretical novelty of face reconstruction using Neural network parameterization is incremental.
questions: What are the typical failure cases of the proposed method? In the EM-like optimization, does the reconstruction always converge to the right optimum?
flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | USsm9RVAsM | official_review | 1,698,638,237,250 | E6EbeJR20o | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Reviewer_C76P"
] | summary: This article introduces a new video dataset with 3D face mesh pseudo-labels and provides a method for annotating spatio-temporally consistent 3D face meshes for existing multi-view facial video data. Based on the results provided by the authors, this dataset is valuable for related research.
soundness: 2 fair
presentation: 3 good
contribution: 3 good
strengths: The dataset introduced by the authors exhibits clear advantages in terms of data quantity, annotation accuracy, and spatio-temporal consistency, as evidenced by the provided data examples. These strengths are valuable for advancing research in the relevant field. Additionally, the optimization method proposed for achieving spatio-temporal consistency seems effective.
weaknesses: 1. Unfair Comparison: The comparison with methods like DECA and EMOCA, which operate on single-view data (DECA-dataset and EMOCA-dataset), cannot utilize multi-view information. It can be argued that the proposed method leverages more information by utilizing multi-view data. Therefore, comparing the proposed multi-view approach to these single-view reconstruction methods may not provide a fair evaluation.
2. The novelty is limited. The proposed temporal-consistency-loss and multi-view-consistency-loss seem more like separate regularizations (or averages) applied to pose, camera parameters, or face shape and expression coefficients to achieve reduced jitter in the reconstructed videos.
I have doubts about the effectiveness of the multi-view-consistency-loss. In the training set, only the MEAD dataset consists of multi-view video data, while VoxCeleb2 and CelebV-HQ have only single-view video data. Consequently, it appears that only the MEAD dataset can effectively leverage the multi-view consistency loss. Table 1 illustrates that MEAD comprises a mere 1% of the total duration, suggesting that the majority of the proposed NeuFace-dataset primarily derives from VoxCeleb2 and CelebV-HQ. In essence, it seems to be a data processing outcome achieved by applying inter-frame smoothing to existing methods. Although I appreciate the authors' effort and the contribution of NeuFace-dataset to the community, the paper's level of innovation may fall slightly below the standard typically expected at ICLR.
questions: 1. As mentioned in the paper, the proposed dataset contains a large amount of data, and the preliminary 3D mesh results generated based on DECA (EMOCA) may have errors. Have the authors considered how to filter out failed reconstruction results?
2. The quality of the reconstructed results for extreme facial expressions appears suboptimal. For instance, in Figure 5's top-left corner, where the open-mouth expression is depicted, the reconstruction of the mouth region does not seem consistent with the original input. Additionally, there appear to be imperfections in the reconstruction of closed-eye expressions.
3. Given the analysis above, while the dataset's scale is certainly commendable, there seems to be room for improvement in terms of reconstruction accuracy. It might be worthwhile for the authors to consider utilizing such data as annotations for 3D landmarks rather than 3D mesh data. Additionally, have the authors explored the possibility of applying their proposed method to a different face model, such as the Basel Face Model (BFM), or investigating alternative pre-trained models instead of DECA or EMOCA?
flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | cOkYwMmqbz | official_comment | 1,700,534,966,697 | md9JBrR4va | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Authors"
] | title: Response to Reviewer axFo (Part 1/3)
comment: We thank the reviewer for the time and the thorough review. By addressing the reviewer’s questions and comments, we could strengthen our paper. We address the concerns and the questions below and in the revision (please check pdf, highlighted in pink).
We’d like to ask the reviewer to re-assess the value of our work with the following clarification.
Please let us know if our answers address the reviewer’s concerns. We would be happy to provide further discussions and clarifications.
### **Weaknesses**
---
> **W1. Claims to investigate facial details. The approach is based on 3DMM with limitations of a linear model to represent fine facial details.**
We would like to clarify the scope of our approach. The confusion stems from the term we used, and we have revised and toned down the wording across the paper in this revision.
Our work aims at the *facial details* of facial gestures and motions, not the mesoscopic geometry of facial skin, which is where the confusion arises. Thus, we do not reconstruct displacement-level facial details, which are not the target of our work. We focus on 3D facial geometries that comply with the input facial gestures and motions.
We revised the paper and toned down our claim of “*reconstructing image-aligned facial details*” to “***reconstructing facial geometries, well complying with input facial gestures and motions***”, highlighted in pink. Thank you for the comment, which helped us specify our scope more precisely.
---
> **W2. Concerns on fair comparison and missing descriptions of experiments (Sec. 3.4, Fig. 4).**
We respectfully note that this concern stems from a misunderstanding of the premise of our work. We hope the explanation below clearly addresses the points behind the reviewer's concern.
First of all, the evaluation of our NeuFace test-time optimization, which we carefully designed, is indeed fair, as it is for all other test-time approaches. Specifically,
1. Our NeuFace optimization, as elaborated in Sec. 3.2, is a TEST-TIME approach and does not involve training with train/validation/test splits. The NeuFace step itself only exploits the input samples at test time; no additional data is used for NeuFace itself.
2. We use the pre-trained base model in NeuFace, e.g., DECA, for test-time fine-tuning on the input samples. Thus, the comparison between the pre-trained base model and our NeuFace (base model + NeuFace test-time optimization) is indeed fair; they have seen exactly the same data.
3. Our NeuFace is evaluated on each sample sequence independently. In Sec. 3.4 and Fig. 4, for NeuFace optimization, we do not split VOCASET into train/validation/test splits. We performed the test-time optimization for each sequence of VOCASET, starting from the pre-trained DECA checkpoint (pre-trained on VGGFace2 [C1], BUPT-Balancedface [C2], and VoxCeleb2 [C3]) and using detected 2D landmarks (we did not use any GT information, including meshes or 2D landmarks, from VOCASET). Note that the 2D landmark detections are not ground truth either. Thus, neither DECA nor our method has had the chance to see any test dataset.
4. Likewise, our method is evaluated and compared on each test sample in each dataset (including VOCASET, MEAD, VoxCeleb2, and CelebV-HQ) independently, without using their ground-truth meshes.
Our goal in the experiments of Sec. 3.4 and Fig. 4 was to compare the quality of the pseudo 3D mesh annotations obtained from existing methods. In other words, the experiment in Sec. 3.4 and Fig. 4 did not aim to compare the performance of each trained model on test datasets.
In the 3D face community, recent works [C4, C5] still naively use a pre-trained DECA or FLAME fitting (baseline) as an annotator. In our experiments, we wanted to show how accurate and reliable NeuFace optimization is in generating pseudo 3D face annotations compared to existing methods.
To conclude, **NeuFace is not a learned model but a test-time optimization method; thus, it does not require training/validation/test sets.** Therefore, we confirm that our experiments, which we carefully designed, are correct and fair.
[C1] Cao et al., VGGFace2: A dataset for recognising faces across pose and age. In International Conference on Automatic Face & Gesture Recognition (FG) 2018.
[C2] Wang et al., Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network. In ICCV 2019.
[C3] Chung et al., VoxCeleb2: Deep Speaker Recognition. In INTERSPEECH 2018.
[C4] Ng et al., Learning to listen: Modeling non-deterministic dyadic facial motion. In CVPR 2022.
[C5] Paraperas et al., Neural Emotion Director: Speech-preserving semantic control of facial expressions in "in-the-wild" videos. In CVPR 2022. |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | RdoegQq0JB | official_comment | 1,700,535,659,849 | md9JBrR4va | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Authors"
] | title: Response to Reviewer axFo (Part 2/3)
comment: > **W3. Unfair comparison in Table 2, Sec A.3., & Table S1.**
Our goal in Table 2 was to compare the quality of our pseudo 3D mesh annotations (NeuFace-D-dataset, NeuFace-E-dataset) with those obtained from existing methods (Base-dataset, DECA-dataset, EMOCA-dataset).
**We emphasize that neither DECA nor our method has the chance to see any test dataset,** given that 1) for Table 2, we do NOT train models using MEAD, VoxCeleb2, and CelebV-HQ at all, and 2) the 2D landmark detections are not ground truth but are estimated with an off-the-shelf module.
In Sec. A.3 and Table S1, we compare the dataset quality obtained with different TEST-TIME loss configurations. When we use only $\mathcal{L}_\text{2D}$ to refine the DECA estimations, the NME indeed improves over DECA’s initial prediction. However, as mentioned in [W2], NeuFace optimization is neither a model learned nor fine-tuned on the MEAD, VoxCeleb2, and CelebV-HQ datasets; it is a test-time optimization method for fitting a 3D face to each video. Also, the 2D landmark detections themselves are pseudo ground truth. Therefore, we conclude that our experiment for Table S1 is correct and fair as well.
### **Questions**
---
> **Q1-1. It would be good if you extended the explanation by adding the dimension of each FLAME parameter.**
Thanks for the thoughtful comment. We have added the detailed explanation in the revision (Sec. 3.1), highlighted in pink, as:
"We use FLAME, a renowned 3DMM, as a 3D face representation. 3D face mesh vertices $\mathbf{M}$ and facial landmarks $\mathbf{J}$ for $F$ frame videos can be acquired with the differentiable skinning: $\mathbf{M}, \mathbf{J}{=}\texttt{FLAME}(\boldsymbol{\mathbf{r}, \boldsymbol{\theta}, \boldsymbol{\beta}, \boldsymbol{\psi}})$, __where $\mathbf{r}\in{\mathbb{R}^{3}}$, $\boldsymbol{\theta}\in{\mathbb R}^{12}$, $\boldsymbol{\beta}\in{\mathbb R}^{100}$ and $\boldsymbol{\psi}\in{\mathbb R}^{50}$__ denote the head orientation, face poses, face shape and expression coefficients, respectively."
> **Q1-2. Considering the displacement map in addition to the FLAME parameter?**
We think optimizing the mesoscopic detailed face geometry by re-parameterizing the displacement map is definitely an interesting future direction.
As discussed in [W1], face skin detail reconstruction is not the scope of this work; our focus is “***reconstructing facial geometries, well complying with input facial gestures and motions.***” More importantly, we are motivated by the fact that the community lacks large-scale in-the-wild or multi-view video datasets that contain high-level face geometry, head motion, identity, and expressions complying with the input videos.
We believe our NeuFace dataset, which contains large-scale, diverse, natural, high-level human 3D face motion, would invigorate the 3D face community.
---
> **Q2-1. Sec. 3.2: how were the ground-truth landmarks for equation 2 obtained?**
We used the off-the-shelf 2D landmark detection model FAN [C6]. We also performed manual human verification to reject failure cases when constructing the NeuFace-dataset.
We have added this to the revision (Sec 3.2, second paragraph, page 4), highlighted in pink. Thanks for checking the details.
[C6] Face Alignment Network (FAN), https://github.com/1adrianb/face-alignment
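For reference, a minimal sketch of how such pseudo ground-truth landmarks can be obtained with the face-alignment package [C6]; the exact enum spelling differs across package versions, so treat the API names below as an assumption:
```python
import numpy as np
import face_alignment
from skimage import io

# FAN-based 2D landmark detector; in newer releases the enum is
# face_alignment.LandmarksType.TWO_D instead of LandmarksType._2D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  flip_input=False, device="cuda")

frame = io.imread("frame_0001.png")      # placeholder path to one video frame
preds = fa.get_landmarks(frame)          # list of (68, 2) arrays, one per detected face

if preds is None:                        # detection failure -> flag for manual verification
    print("no face detected")
else:
    lmk = np.asarray(preds[0])
    print(lmk.shape)                     # (68, 2)
```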
> **Q2-2. In the Multi-view consistency loss, where do the confidence values come from?**
We assign the confidence score per vertex by measuring the angle between the vertex normal and the camera ray. We set the vertices as invisible if the angle is larger than the threshold $\tau_a$, and the vertex has a deeper depth than $\tau_{z}$ (i.e., $z<\tau_z$). We empirically choose $\tau_a = 72^\circ$, $\tau_z=-0.08$. We have revised the paper accordingly (p5, Sec 3.2), highlighted in pink. |
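For clarity, a simplified sketch of this visibility test. Shapes and coordinate conventions are assumptions for illustration: we assume vertices and normals are already expressed in the camera frame, with the camera looking down the $-z$ axis.
```python
import torch

def vertex_confidence(verts_cam, normals_cam, tau_a_deg=72.0, tau_z=-0.08):
    """Per-vertex visibility confidence for a single view.

    verts_cam:   (V, 3) vertices in camera coordinates.
    normals_cam: (V, 3) unit vertex normals in camera coordinates.
    """
    to_camera = torch.tensor([0.0, 0.0, 1.0])               # direction from surface toward camera
    cos_a = (normals_cam * to_camera).sum(-1).clamp(-1.0, 1.0)
    angle = torch.rad2deg(torch.acos(cos_a))                 # angle between normal and camera ray
    # A vertex is treated as invisible when it is strongly back-facing
    # and lies deeper than the depth threshold, as described above.
    invisible = (angle > tau_a_deg) & (verts_cam[..., 2] < tau_z)
    return (~invisible).float()                              # 1 for visible, 0 for invisible

# toy usage
verts = torch.randn(5023, 3)
normals = torch.nn.functional.normalize(torch.randn(5023, 3), dim=-1)
conf = vertex_confidence(verts, normals)
print(conf.shape)  # torch.Size([5023])
```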
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | 5X2IPKljTN | official_comment | 1,700,535,812,266 | md9JBrR4va | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Authors"
] | title: Response to Reviewer axFo (Part 3/3)
comment: > **Q3. Would an alternative robust operation, e.g., median, improve the results?**
Following the reviewer's suggestion, we have compared the mean, median, and weighted average (ours) in the multi-view loss. We compare the results in two scenarios: 1) Standard cases (original images) and 2) Extreme cases (perturbed images). For the extreme cases, we randomly occlude the facial areas with large black boxes in 2~3 views of the multi-view videos to mimic significant corruption.
| Loss configuration | Standard cases (CVD)↓ | Extreme cases (CVD)↓ |
| --- | --- | --- |
| Average | 0.106 | 0.124 |
| Median | 0.104 | 0.112 |
| **Weighted average (Ours)** | 0.103 | 0.113 |
- Results of standard cases: All three methods (mean, median, and weighted average) showed similar performance, with no marked difference in optimization results evaluated on the MEAD subset.
- Results of extreme cases: The median outperforms the mean in perturbed MEAD data.
- Our Method: Our methodology, which employs a weighted average grounded in the confidence scores from multiple view vertices, not only shows favorable performance over the simple average but also aligns closely with the median's results. This indicates that our approach maintains robustness in extreme cases, akin to the median operation highlighted by the reviewer.
The reviewer's suggestion about the median was insightful. Both our weighted average and the median remain robust in extreme scenarios. One difference is that our visibility-based weighted average can adjust its robustness through hyperparameters (as in Q2-2).
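For reference, a minimal sketch of the three aggregation variants compared above (tensor shapes are illustrative; `conf` denotes the per-vertex visibility confidences described in Q2-2):
```python
import torch

def aggregate_views(verts, conf, mode="weighted"):
    """Aggregate per-view vertex estimates into a single target mesh.

    verts: (N_views, V, 3) vertices predicted from each view, in a shared frame.
    conf:  (N_views, V) per-vertex visibility confidences in [0, 1].
    """
    if mode == "mean":
        return verts.mean(dim=0)
    if mode == "median":
        return verts.median(dim=0).values
    if mode == "weighted":                      # ours: confidence-weighted average
        w = conf.unsqueeze(-1)                  # (N_views, V, 1)
        return (w * verts).sum(dim=0) / w.sum(dim=0).clamp(min=1e-8)
    raise ValueError(f"unknown mode: {mode}")

# toy usage: 7 camera views of a FLAME-sized mesh
verts = torch.randn(7, 5023, 3)
conf = torch.rand(7, 5023)
target = aggregate_views(verts, conf, mode="weighted")
print(target.shape)  # torch.Size([5023, 3])
```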
---
> **Q4. Ethics Concerns: Some of these datasets were automatically gathered from the internet. So, it is unclear whether the new dataset is legally compliant.**
We will not release the video dataset itself, but will release only the optimized 3DMM parameters obtained by our method without the video frames that might have been gathered from the internet.
The optimized 3DMM parameter does not contain identity-specific metadata or facial texture maps. Also, we will release the code that can generate pseudo ground-truth datasets like the NeuFace dataset for generic applications. |
E6EbeJR20o | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization | [
"Kim Youwang",
"Lee Hyun",
"Kim Sung-Bin",
"Suekyeong Nam",
"Janghoon Ju",
"Tae-Hyun Oh"
] | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via neural re-parameterized optimization. Despite the huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate the per-view/-frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset. We investigate how neural re-parameterization helps to reconstruct image-aligned facial details on 3D meshes via gradient analysis. By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning 3D facial motion prior. Code and datasets will be publicly available if accepted. | /pdf/0ffddd57830cde500cd4114277d925fcdc17335f.pdf | zPhSZsp5Nc | official_comment | 1,700,536,022,567 | DC6e0UeMhb | [
"everyone"
] | [
"ICLR.cc/2024/Conference/Submission12/Authors"
] | title: Response to Reviewer GPao
comment: We thank the reviewer for the interest in our work and the valuable feedback that strengthens our paper. We address the concern below and in the revision (please check pdf, highlighted in pink).
### **Weaknesses**
---
> **W1. The theoretical novelty of face reconstruction using Neural network parameterization is incremental.**
We have not claimed theoretical novelty itself.
The theoretical context in Sec. A.2 helps provide a deeper understanding of the analysis and the algorithmic behavior of our proposed optimization system. We respectfully ask the reviewer to consider our contributions below:
- Proposing the first and insightful concept of neural re-parameterized 3D face optimization, which mitigates the undesirable sparse gradient for face optimization (acknowledged by Reviewer B8S8).
- Providing a NeuFace-dataset, the first large-scale 3D face mesh pseudo-labels for existing large-scale 2D face video datasets. The dataset would benefit future research in this field (acknowledged by all the other reviewers).
- Extensive experiments to compare the quality and reliability of the proposed optimization and the dataset, and some empirical analysis.
### **Questions**
---
> **Q1. What are the typical failure cases of the proposed method?**
Failure cases can occur when the 2D video itself contains extreme degradations, e.g., motion blur, low resolution, or extreme (> 50%) occlusion, so that 2D keypoint detection fails. Please note that, when constructing the NeuFace-dataset, we tackle these cases with automatic filtering followed by human verification, as discussed in Appendix Sec. B of the initial submission, which guarantees the reliability of the dataset.
---
> **Q2. In the EM-like optimization, does the reconstruction always converge to the right optimum?**
The NeuFace optimization and its losses are designed in a self-improving manner. Although the initial estimate of DECA can be noisy, the strong measurements from the detected 2D landmarks and the robust target supervision at each iteration gradually correct the initial noisy predictions. The theoretical analysis shows that our neural re-parameterized optimization is highly likely to converge to global optima (Appendix Sec. A.2 of the initial submission). This hints that, with our neural re-parameterization, the optimization is robust to noisy initializations and is guaranteed to exhibit at least more stable optimization behavior than the compared baseline methods.