Dataset schema (column: type, range or class count):
bibtex_url: null
proceedings: stringlengths 42–42
bibtext: stringlengths 215–445
abstract: stringlengths 820–2.37k
title: stringlengths 24–147
authors: sequencelengths 1–13
id: stringclasses (1 value)
type: stringclasses (2 values)
arxiv_id: stringlengths 0–10
GitHub: sequencelengths 1–1
paper_page: stringclasses (33 values)
n_linked_authors: int64, -1 to 4
upvotes: int64, -1 to 21
num_comments: int64, -1 to 4
n_authors: int64, -1 to 11
Models: sequencelengths 0–1
Datasets: sequencelengths 0–1
Spaces: sequencelengths 0–4
old_Models: sequencelengths 0–1
old_Datasets: sequencelengths 0–1
old_Spaces: sequencelengths 0–4
paper_page_exists_pre_conf: int64, 0 to 1
null
https://openreview.net/forum?id=OTjo1q8rWL
@inproceedings{ liu2024insvp, title={Ins{VP}: Efficient Instance Visual Prompting from Image Itself}, author={Zichen Liu and Yuxin Peng and Jiahuan Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=OTjo1q8rWL} }
Visual prompting is an efficient methodology for finetuning pretrained visual models by introducing a small number of learnable parameters while keeping the backbone frozen. However, most existing visual prompting methods learn a shared prompt for all samples, making it challenging to grasp distinct characteristics among diverse samples, thereby limiting the model's performance. While other methods partially address this issue through sample clustering and learning multiple prompts, they still struggle to capture nuanced differences among instances and incur significant parameter overhead. Therefore, to comprehensively and efficiently leverage discriminative characteristics of individual instances, we propose an Instance Visual Prompting method, called InsVP. Initially, the instance image prompt is introduced to extract both crucial and nuanced discriminative information from the original image itself and is overlaid onto the input image. Furthermore, the instance feature prompt is designed to capture both commonalities and characteristics among individual instances, fed into the model's intermediate layers to facilitate feature extraction. Consequently, the instance image and feature prompts complement each other, enhancing the adaptation ability of pretrained models to extract discriminative features from individual instances. Extensive experiments on various large-scale benchmarks show that our InsVP achieves superior performance exceeding the state-of-the-art methods at a lower parameter cost. Our code will be released.
InsVP: Efficient Instance Visual Prompting from Image Itself
[ "Zichen Liu", "Yuxin Peng", "Jiahuan Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
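The InsVP record above describes prompts generated from the image itself while the pretrained backbone stays frozen. As a rough illustration only, the following minimal PyTorch sketch shows the general idea of instance-level image prompting; the module name `InstanceImagePrompt` and its layer sizes are hypothetical assumptions, not the paper's actual design.

```python
# Hypothetical sketch of instance-level visual prompting in the spirit of InsVP:
# a lightweight generator derives a per-image prompt from the image itself and
# overlays it on the input, while the pretrained backbone stays frozen.
import torch
import torch.nn as nn

class InstanceImagePrompt(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        # Small conv net that maps the image to a same-sized residual prompt.
        self.generator = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        prompt = self.generator(image)   # instance-specific prompt
        return image + prompt            # overlay the prompt onto the input

# Usage: only the prompt generator is trained; the backbone is kept frozen.
backbone = nn.Identity()                 # stand-in for a frozen pretrained ViT/CNN
for p in backbone.parameters():
    p.requires_grad_(False)
prompter = InstanceImagePrompt()
x = torch.randn(2, 3, 224, 224)
features = backbone(prompter(x))
```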
null
https://openreview.net/forum?id=OQ9dB9s7P2
@inproceedings{ wang2024stdface, title={S2{TD}-Face: Reconstruct a Detailed 3D Face with Controllable Texture from a Single Sketch}, author={Zidu Wang and Xiangyu Zhu and Jiang Yu and Tianshuo Zhang and Zhen Lei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=OQ9dB9s7P2} }
3D textured face reconstruction from sketches, which is applicable in many scenarios such as animation, 3D avatars, artistic design, and missing-person search, is a highly promising but underdeveloped research topic. On the one hand, the stylistic diversity of sketches means that existing sketch-to-3D-face methods can only handle pose-limited and realistically shaded sketches. On the other hand, texture plays a vital role in representing facial appearance, yet sketches lack this information, necessitating additional texture control in the reconstruction process. This paper proposes a novel method for reconstructing controllable textured and detailed 3D faces from sketches, named S2TD-Face. S2TD-Face introduces a two-stage geometry reconstruction framework that directly reconstructs detailed geometry from the input sketch. To keep the geometry consistent with the delicate strokes of the sketch, we propose a novel sketch-to-geometry loss that ensures the reconstruction accurately fits input features such as dimples and wrinkles. Our training strategies do not rely on hard-to-obtain 3D face scanning data or labor-intensive hand-drawn sketches. Furthermore, S2TD-Face introduces a texture control module utilizing text prompts to select the most suitable textures from a library and seamlessly integrate them into the geometry, resulting in a detailed 3D face with controllable texture. S2TD-Face surpasses existing state-of-the-art methods in extensive quantitative and qualitative experiments. The code will be publicly available.
S2TD-Face: Reconstruct a Detailed 3D Face with Controllable Texture from a Single Sketch
[ "Zidu Wang", "Xiangyu Zhu", "Jiang Yu", "Tianshuo Zhang", "Zhen Lei" ]
Conference
poster
2408.01218
[ "https://github.com/wang-zidu/s2td-face" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=OM2GI7hUEc
@inproceedings{ zhang2024generative, title={Generative Motion Stylization of Cross-structure Characters within Canonical Motion Space}, author={Jiaxu Zhang and Xin Chen and Gang Yu and Zhigang Tu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=OM2GI7hUEc} }
Stylized motion breathes life into characters. However, the fixed skeleton structure and style representation hinder existing data-driven motion synthesis methods from generating stylized motion for various characters. In this work, we propose a generative motion stylization pipeline, named MotionS, for synthesizing diverse and stylized motion on cross-structure characters using cross-modality style prompts. Our key insight is to embed motion style into a cross-modality latent space and perceive the cross-structure skeleton topologies, allowing for motion stylization within a canonical motion space. Specifically, the large-scale Contrastive-Language-Image-Pre-training (CLIP) model is leveraged to construct the cross-modality latent space, enabling flexible style representation within it. Additionally, two topology-encoded tokens are learned to capture the canonical and specific skeleton topologies, facilitating cross-structure topology shifting. Subsequently, the topology-shifted stylization diffusion is designed to generate motion content for the particular skeleton and stylize it in the shifted canonical motion space using multi-modality style descriptions. Through an extensive set of examples, we demonstrate the flexibility and generalizability of our pipeline across various characters and style descriptions. Qualitative and quantitative comparisons show the superiority of our pipeline over state-of-the-art methods, consistently delivering high-quality stylized motion across a broad spectrum of skeletal structures.
Generative Motion Stylization of Cross-structure Characters within Canonical Motion Space
[ "Jiaxu Zhang", "Xin Chen", "Gang Yu", "Zhigang Tu" ]
Conference
poster
2403.11469
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=O9gzkYvNKp
@inproceedings{ zhang2024imbalanced, title={Imbalanced Multi-instance Multi-label Learning via Coding Ensemble and Adaptive Thresholds}, author={Xinyue Zhang and Tingjin Luo and liuyueying and Chenping Hou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=O9gzkYvNKp} }
Multi-instance multi-label learning (MIML), which deals with objects with complex structures and multiple semantics, plays a crucial role in various fields. In practice, the naturally skewed label distribution and label dependence contribute to the issue of label imbalance in MIML, which is crucial but rarely studied. Most existing MIML methods often produce biased models due to the ignorance of inter-class variations in imbalanced data. To address this issue, we propose a novel imbalanced multi-instance multi-label learning method named IMIMLC, based on the error-correcting coding ensemble and an adaptive threshold strategy. Specifically, we design a feature embedding method to extract the structural information of each object via Fisher vectors and eliminate inexact supervision. Subsequently, to alleviate the disturbance caused by the imbalanced distribution, a novel ensemble model is constructed by concatenating the error-correcting codes of randomly selected subtasks. Meanwhile, IMIMLC trains binary base classifiers on small-scale data blocks partitioned by our codes to enhance their diversity and then learns more reliable results to improve model robustness for the imbalance issue. Furthermore, IMIMLC adaptively learns thresholds for each individual label by margin maximization, preventing inaccurate predictions caused by the semantic discrepancy across many labels and their unbalanced ratios. Finally, extensive experimental results on various datasets validate the effectiveness of IMIMLC against state-of-the-art approaches.
Imbalanced Multi-instance Multi-label Learning via Coding Ensemble and Adaptive Thresholds
[ "Xinyue Zhang", "Tingjin Luo", "liuyueying", "Chenping Hou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=O9Vuj6lzya
@inproceedings{ chen2024holisticcam, title={Holistic-{CAM}: Ultra-lucid and Sanity Preserving Visual Interpretation in Holistic Stage of {CNN}s}, author={Pengxu Chen and Huazhong Liu and Jihong Ding and Jiawen Luo and Peng Tan and Laurence T. Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=O9Vuj6lzya} }
As visual interpretations for convolutional neural networks (CNNs), backpropagation attribution methods have been garnering growing attention. Nevertheless, the majority of these methods merely concentrate on the ultimate convolutional layer, leading to tiny and concentrated interpretations that fail to adequately clarify the model-central attention. Therefore, we propose a precise attribution method (i.e., Holistic-CAM) for high-definition visual interpretation in the holistic stage of CNNs. Specifically, we first present weighted positive gradients to guarantee the sanity of interpretations in shallow layers and leverage multi-scale fusion to improve the resolution across the holistic stage. Then, we further propose fundamental scale denoising to eliminate the unfaithful attribution originating from fusing larger-scale components. The proposed method is capable of simultaneously rendering fine-grained and faithful attribution for CNNs from shallow to deep layers. Extensive experimental results demonstrate that Holistic-CAM outperforms state-of-the-art methods on commonly used benchmarks on ImageNet-1k, including deletion and insertion, the energy-based pointing game, and remove-and-debias; it also passes the sanity check easily.
Holistic-CAM: Ultra-lucid and Sanity Preserving Visual Interpretation in Holistic Stage of CNNs
[ "Pengxu Chen", "Huazhong Liu", "Jihong Ding", "Jiawen Luo", "Peng Tan", "Laurence T. Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=O46oXPBpYO
@inproceedings{ yang2024multiinstance, title={Multi-Instance Multi-Label Learning for Text-motion Retrieval}, author={Yang Yang and LiyuanCao and Haoyu Shi and Huaiwen Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=O46oXPBpYO} }
Text-motion retrieval (TMR) is a significant cross-modal task, aiming at retrieving semantically similar motion sequences for a given query text. Existing studies primarily focus on representing and aligning the text and motion sequence with single embeddings. However, in the real world, a motion sequence usually consists of multiple atomic motions with complicated semantics. Such a simple approach can hardly capture the complex relations and abundant semantics in the text and motion sequence. In addition, most atomic motions may co-occur and be coupled together, which further brings considerable challenges in modeling and aligning query and motion sequences. In this paper, we regard TMR as a multi-instance multi-label learning (MIML) problem, where the motion sequence is viewed as a bag of atomic motions and the text as a bag of corresponding phrase descriptions. To address the MIML problem, we propose a novel multi-granularity semantics interaction (MGSI) approach to capture and align the semantics of text and motion sequences at various scales. Specifically, MGSI first decomposes the query and motion sequences into three levels: events (bags), actions (instances), and entities. After that, we adopt graph neural networks (GNNs) to explicitly model their semantic correlation and perform semantic interaction at the corresponding scales to align text and motion. In addition, we introduce a co-occurring motion mining approach that adopts the semantic consistency between atomic motions as a measure to identify co-occurring atomic motions. These co-occurring atomic motions are fused and interacted with the corresponding text to achieve precise cross-modal alignment. We evaluate our method on the HumanML3D and KIT-ML datasets, achieving improvements in Rsum of 23.09\% on HumanML3D and 21.84\% on KIT-ML.
Multi-Instance Multi-Label Learning for Text-motion Retrieval
[ "Yang Yang", "LiyuanCao", "Haoyu Shi", "Huaiwen Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NrHuNScX2p
@inproceedings{ su2024soil, title={{SOIL}: Contrastive Second-Order Interest Learning for Multimodal Recommendation}, author={Hongzu Su and Jingjing Li and Fengling Li and Ke Lu and Lei Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NrHuNScX2p} }
Mainstream multimodal recommender systems are designed to learn user interest by analyzing user-item interaction graphs. However, the user interest they learn is incomplete, because historical interactions only record items that best match user interest (i.e., the first-order interest), while suboptimal items are absent. To fully exploit user interest, we propose a Second-Order Interest Learning (SOIL) framework to retrieve second-order interest from unrecorded suboptimal items. In this framework, we build a user-item interaction graph augmented by second-order interest, an interest-aware item-item graph for the visual modality, and an analogous graph for the textual modality. In our work, all three graphs are constructed from user-item interaction records and multimodal feature similarity. As in other graph-based approaches, we apply graph convolutional networks to each of the three graphs to learn representations of users and items. To improve the exploitation of both first-order and second-order interest, we optimize the model with contrastive learning modules for user and item representations at both the user-item and item-item levels. The proposed framework is evaluated on three real-world public datasets in online shopping scenarios. Experimental results verify that our method significantly improves prediction performance. For instance, our method outperforms the previous state-of-the-art method MGCN by an average of $8.1\%$ in terms of Recall@10.
SOIL: Contrastive Second-Order Interest Learning for Multimodal Recommendation
[ "Hongzu Su", "Jingjing Li", "Fengling Li", "Ke Lu", "Lei Zhu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NqXxNrMU6T
@inproceedings{ kosugi2024promptguided, title={Prompt-Guided Image-Adaptive Neural Implicit Lookup Tables for Interpretable Image Enhancement}, author={Satoshi Kosugi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NqXxNrMU6T} }
In this paper, we delve into the concept of interpretable image enhancement, a technique that enhances image quality by adjusting filter parameters with easily understandable names such as “Exposure” and “Contrast”. Unlike using predefined image editing filters, our framework utilizes learnable filters that acquire interpretable names through training. Our contribution is two-fold. Firstly, we introduce a novel filter architecture called an image-adaptive neural implicit lookup table, which uses a multilayer perceptron to implicitly define the transformation from input feature space to output color space. By incorporating image-adaptive parameters directly into the input features, we achieve highly expressive filters. Secondly, we introduce a prompt guidance loss to assign interpretable names to each filter. We evaluate visual impressions of enhancement results, such as exposure and contrast, using a vision and language model along with guiding prompts. We define a constraint to ensure that each filter affects only the targeted visual impression without influencing other attributes, which allows us to obtain the desired filter effects. Experimental results show that our method outperforms existing predefined filter-based methods, thanks to the filters optimized to predict target results. We will make our code publicly available upon acceptance.
Prompt-Guided Image-Adaptive Neural Implicit Lookup Tables for Interpretable Image Enhancement
[ "Satoshi Kosugi" ]
Conference
poster
2408.11055
[ "https://github.com/satoshi-kosugi/pg-ia-nilut" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
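The record above describes an image-adaptive neural implicit lookup table: an MLP that implicitly maps an input color, conditioned on image-adaptive parameters, to an output color. The sketch below is only an illustration of that general idea under assumed sizes and conditioning; the class name `NeuralImplicitLUT` and its architecture are hypothetical, not the paper's implementation.

```python
# Minimal sketch of an image-adaptive neural implicit lookup table: an MLP maps
# (pixel color, image-adaptive parameters) to an enhanced output color.
import torch
import torch.nn as nn

class NeuralImplicitLUT(nn.Module):
    def __init__(self, cond_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, rgb: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3) pixel colors in [0, 1]; cond: (N, cond_dim) image-adaptive parameters
        return torch.sigmoid(self.mlp(torch.cat([rgb, cond], dim=-1)))

lut = NeuralImplicitLUT()
pixels = torch.rand(1024, 3)                  # flattened pixels of one image
cond = torch.randn(1, 8).expand(1024, -1)     # one parameter vector shared across the image
enhanced = lut(pixels, cond)
```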
null
https://openreview.net/forum?id=NndGM6iDip
@inproceedings{ wang2024finegrained, title={Fine-grained Semantic Alignment with Transferred Person-{SAM} for Text-based Person Retrieval}, author={Yihao Wang and Meng Yang and Rui Cao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NndGM6iDip} }
Addressing the disparity in description granularity and information gap between images and text has long been a formidable challenge in text-based person retrieval (TBPR) tasks. Recent researchers tried to solve this problem by random local alignment. However, they failed to capture the fine-grained relationships between images and text, so the information and modality gaps remain on the table. We align image regions and text phrases at the same semantic granularity to address the semantic atomicity gap. Our idea is first to extract and then exploit the relationships between fine-grained locals. We introduce a novel Fine-grained Semantic Alignment with Transferred Person-SAM (SAP-SAM) approach. By distilling and transferring knowledge, we propose a Person-SAM model to extract fine-grained semantic concepts at the same granularity from images and texts of TBPR and its relationships. With the extracted knowledge, we optimize the fine-grained matching via Explicit Local Concept Alignment and Attentive Cross-modal Decoding to discriminate fine-grained image and text features at the same granularity level and represent the important semantic concepts from both modalities, effectively alleviating the granularity and information gaps. We evaluate our proposed approach on three popular TBPR datasets, demonstrating that SAP-SAM achieves state-of-the-art results and underscores the effectiveness of end-to-end fine-grained local alignment in TBPR tasks.
Fine-grained Semantic Alignment with Transferred Person-SAM for Text-based Person Retrieval
[ "Yihao Wang", "Meng Yang", "Rui Cao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Nj06TDhrkL
@inproceedings{ huang2024portrait, title={Portrait Shadow Removal via Self-Exemplar Illumination Equalization}, author={Qian Huang and Cheng Xu and Guiqing Li and Ziheng Wu and Shengxin Liu and Shengfeng He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Nj06TDhrkL} }
We introduce the Self-Exemplar Illumination Equalization Network, designed specifically for effective portrait shadow removal. The core idea of our method is that partially shadowed portraits can find ideal exemplars within their non-shadowed facial regions. Rather than directly fusing two distinct classes of facial features, our approach utilizes non-shadowed regions as an illumination indicator to equalize the shadowed regions, generating deshadowed results without boundary-merging artifacts. Our network comprises cascaded Self-Exemplar Illumination Equalization Blocks (SExmBlock), each containing two modules: a self-exemplar feature matching module and a feature-level illumination rectification module. The former identifies and applies internal illumination exemplars to shadowed areas, producing illumination-corrected features, while the latter adjusts shadow illumination by reapplying the illumination factors from these features to the input face. Applying this series of SExmBlocks to shadowed portraits incrementally eliminates shadows and preserves clear, accurate facial details. The effectiveness of our method is demonstrated through evaluations on two public shadow portrait datasets, where it surpasses existing state-of-the-art methods in both qualitative and quantitative assessments.
Portrait Shadow Removal via Self-Exemplar Illumination Equalization
[ "Qian Huang", "Cheng Xu", "Guiqing Li", "Ziheng Wu", "Shengxin Liu", "Shengfeng He" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NiXtmdgIOz
@inproceedings{ wang2024capsadapter, title={CapS-Adapter: Caption-based MultiModal Adapter in Zero-Shot Classification}, author={Qijie Wang and Liu Guandu and Bin Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NiXtmdgIOz} }
Recent advances in vision-language foundational models, such as CLIP, have demonstrated significant strides in zero-shot classification. However, the extensive parameterization of models like CLIP necessitates a resource-intensive fine-tuning process. In response, TIP-Adapter and SuS-X have introduced training-free methods aimed at bolstering the efficacy of downstream tasks. While these approaches incorporate support sets to maintain data distribution consistency between knowledge cache and test sets, they often fall short in terms of generalization on the test set, particularly when faced with test data exhibiting substantial distributional variations. In this work, we present CapS-Adapter, an innovative method that employs a caption-based support set, effectively harnessing both image and caption features to exceed existing state-of-the-art techniques in training-free scenarios. CapS-Adapter adeptly constructs support sets that closely mirror target distributions, utilizing instance-level distribution features extracted from multimodal large models. By leveraging CLIP's single and cross-modal strengths, CapS-Adapter enhances predictive accuracy through the use of multimodal support sets. Our method achieves outstanding zero-shot classification results across 19 benchmark datasets, improving accuracy by 2.19\% over the previous leading method. Our contributions are substantiated through extensive validation on multiple benchmark datasets, demonstrating superior performance and robust generalization capabilities.
CapS-Adapter: Caption-based MultiModal Adapter in Zero-Shot Classification
[ "Qijie Wang", "Liu Guandu", "Bin Wang" ]
Conference
poster
2405.16591
[ "https://github.com/wluli/caps-adapter" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
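The CapS-Adapter record above builds on training-free, cache-based adapters (TIP-Adapter, SuS-X) that blend zero-shot CLIP logits with an affinity term computed against cached support features. The following is a hedged sketch of that common cache-model recipe, not CapS-Adapter itself; the function name, alpha/beta values, and the exponential sharpening are assumptions for illustration.

```python
# Generic training-free cache-model classifier: zero-shot CLIP logits plus an
# affinity term against cached support-set features and their one-hot labels.
import torch

def cache_model_logits(test_feat, text_feat, support_feat, support_onehot,
                       alpha: float = 1.0, beta: float = 5.0):
    # All features are L2-normalized: test (N, D), text (C, D),
    # support (M, D), support_onehot (M, C).
    zero_shot = 100.0 * test_feat @ text_feat.t()       # standard CLIP zero-shot logits
    affinity = test_feat @ support_feat.t()              # (N, M) cosine similarity
    cache = torch.exp(-beta * (1.0 - affinity)) @ support_onehot
    return zero_shot + alpha * cache

N, M, C, D = 4, 32, 10, 512
f = torch.nn.functional.normalize
logits = cache_model_logits(f(torch.randn(N, D), dim=-1),
                            f(torch.randn(C, D), dim=-1),
                            f(torch.randn(M, D), dim=-1),
                            torch.eye(C)[torch.randint(0, C, (M,))])
```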
null
https://openreview.net/forum?id=NaUbjH7QEL
@inproceedings{ liu2024emphasizing, title={Emphasizing Semantic Consistency of Salient Posture for Speech-Driven Gesture Generation}, author={Fengqi Liu and Hexiang Wang and Jingyu Gong and Ran Yi and Qianyu Zhou and Xuequan Lu and Jiangbo Lu and Lizhuang Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NaUbjH7QEL} }
Speech-driven gesture generation aims at synthesizing a gesture sequence synchronized with the input speech signal. Previous methods leverage neural networks to directly map a compact audio representation to the gesture sequence, ignoring the semantic association of different modalities and failing to deal with salient gestures. In this paper, we propose a novel speech-driven gesture generation method by emphasizing the semantic consistency of salient posture. Specifically, we first learn a joint manifold space for the individual representation of audio and body pose to exploit the inherent semantic association between the two modalities, and propose to enforce semantic consistency via a consistency loss. Furthermore, we emphasize the semantic consistency of salient postures by introducing a weakly-supervised detector to identify salient postures, and reweighting the consistency loss to focus more on learning the correspondence between salient postures and the high-level semantics of speech content. In addition, we propose to extract audio features dedicated to facial expression and body gesture separately, and design separate branches for face and body gesture synthesis. Extensive experiments and visualization results demonstrate the superiority of our method over the state-of-the-art approaches.
Emphasizing Semantic Consistency of Salient Posture for Speech-Driven Gesture Generation
[ "Fengqi Liu", "Hexiang Wang", "Jingyu Gong", "Ran Yi", "Qianyu Zhou", "Xuequan Lu", "Jiangbo Lu", "Lizhuang Ma" ]
Conference
poster
2410.13786
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NVzON5t3iz
@inproceedings{ jiang2024counterfactually, title={Counterfactually Augmented Event Matching for De-biased Temporal Sentence Grounding}, author={Xun Jiang and Zhuoyuan Wei and Shenshen Li and Xing Xu and Jingkuan Song and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NVzON5t3iz} }
Temporal Sentence Grounding (TSG), which aims to localize events in untrimmed videos with a given language query, has been widely studied in recent decades. However, researchers have recently demonstrated that previous approaches are severely limited in out-of-distribution generalization, thus proposing the De-biased TSG challenge, which requires models to overcome their weakness on outlier test samples. In this paper, we design a novel framework, termed Counterfactually-Augmented Event Matching (CAEM), which incorporates counterfactual data augmentation to learn event-query joint representations that resist the training bias. Specifically, it consists of three components: (1) A Temporal Counterfactual Augmentation module that generates counterfactual video-text pairs by temporally delaying events in the untrimmed video, enhancing the model's capacity for counterfactual thinking. (2) An Event-Query Matching model that is used to learn joint representations and predict corresponding matching scores for each event candidate. (3) A Counterfact-Adaptive Framework (CAF) that incorporates counterfactual consistency rules into the matching process of the same event-query pairs, further mitigating the bias learned from training sets. We conduct thorough experiments on two widely used DTSG datasets, i.e., Charades-CD and ActivityNet-CD, to evaluate our proposed CAEM method. Extensive experimental results show that our proposed CAEM method outperforms recent state-of-the-art methods on all datasets. Our implementation code is available at https://github.com/CFM-MSG/CAEM_Code.
Counterfactually Augmented Event Matching for De-biased Temporal Sentence Grounding
[ "Xun Jiang", "Zhuoyuan Wei", "Shenshen Li", "Xing Xu", "Jingkuan Song", "Heng Tao Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NTpIYCW90q
@inproceedings{ zhang2024vecaf, title={Ve{CAF}: Vision-language Collaborative Active Finetuning with Training Objective Awareness}, author={Rongyu Zhang and Zefan Cai and Huanrui Yang and Zidong Liu and Denis A Gudovskiy and Tomoyuki Okuno and Yohei Nakata and Kurt Keutzer and Baobao Chang and Yuan Du and Li Du and Shanghang Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NTpIYCW90q} }
Finetuning a pretrained vision model (PVM) is a common technique for learning downstream vision tasks. However, the conventional finetuning process with randomly sampled data points results in diminished training efficiency. To address this drawback, we propose a novel approach, Vision-language Collaborative Active Finetuning (VeCAF). With the emerging availability of labels and natural language annotations of images through web-scale crawling or controlled generation, VeCAF makes use of this information to perform parametric data selection for PVM finetuning. VeCAF incorporates the finetuning objective to select significant data points that effectively guide the PVM towards faster convergence to meet the performance goal. This process is assisted by the inherent semantic richness of the text embedding space, which we use to augment image features. Furthermore, the flexibility of text-domain augmentation allows VeCAF to handle out-of-distribution scenarios without external data. Extensive experiments show the leading performance and high computational efficiency of VeCAF, which is superior to baselines in both in-distribution and out-of-distribution image classification tasks. On ImageNet, VeCAF uses up to 3.3$\times$ fewer training batches to reach the target performance compared to full finetuning, and achieves an accuracy improvement of 2.8\% over the state-of-the-art active finetuning method with the same number of batches.
VeCAF: Vision-language Collaborative Active Finetuning with Training Objective Awareness
[ "Rongyu Zhang", "Zefan Cai", "Huanrui Yang", "Zidong Liu", "Denis A Gudovskiy", "Tomoyuki Okuno", "Yohei Nakata", "Kurt Keutzer", "Baobao Chang", "Yuan Du", "Li Du", "Shanghang Zhang" ]
Conference
poster
2401.07853
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NQPJYEyiiM
@inproceedings{ zheng2024nonuniform, title={Non-uniform Timestep Sampling: Towards Faster Diffusion Model Training}, author={Tianyi Zheng and Cong Geng and Peng-Tao Jiang and Ben Wan and Hao Zhang and Jinwei Chen and Jia Wang and Bo Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NQPJYEyiiM} }
Diffusion models have garnered significant success in generative tasks, emerging as the predominant model in this domain. Despite their success, the substantial computational resources required for training diffusion models restrict their practical applications. In this paper, we resort to optimal transport theory to accelerate the training of diffusion models, providing an in-depth analysis of the forward diffusion process. The analysis shows that the upper bound on the Wasserstein distance between the distributions at any two timesteps of the diffusion process decays exponentially from the initial distance. This finding suggests that the state distribution of the diffusion model changes at a non-uniform rate over time, highlighting the differing importance of the diffusion timesteps. To this end, we propose a novel non-uniform timestep sampling method based on the Bernoulli distribution, which favors more frequent sampling in significant timestep intervals. The key idea is to make the model focus on timesteps with larger differences, thus accelerating the training of the diffusion model. Experiments on benchmark datasets reveal that the proposed method significantly reduces the computational overhead while improving the quality of the generated images.
Non-uniform Timestep Sampling: Towards Faster Diffusion Model Training
[ "Tianyi Zheng", "Cong Geng", "Peng-Tao Jiang", "Ben Wan", "Hao Zhang", "Jinwei Chen", "Jia Wang", "Bo Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
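The record above proposes drawing diffusion training timesteps non-uniformly so that more important intervals are sampled more often. The sketch below only illustrates the generic idea of importance-weighted timestep sampling; the exponential weighting is an arbitrary assumption and not the paper's Bernoulli-based rule.

```python
# Illustrative non-uniform timestep sampling for diffusion training: instead of
# drawing t uniformly, timesteps are drawn from an importance-weight vector.
import torch

def sample_timesteps(batch_size: int, num_steps: int, weights: torch.Tensor) -> torch.Tensor:
    # weights: (num_steps,) non-negative importance scores over timesteps
    probs = weights / weights.sum()
    return torch.multinomial(probs, batch_size, replacement=True)

T = 1000
t_idx = torch.arange(T, dtype=torch.float32)
weights = torch.exp(-t_idx / 300.0) + 0.1   # example: emphasize low-noise timesteps
t = sample_timesteps(64, T, weights)        # use t to form the denoising loss as usual
```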
null
https://openreview.net/forum?id=NPWBRCwomt
@inproceedings{ chen2024stay, title={Stay Focused is All You Need for Adversarial Robustness}, author={Bingzhi Chen and Ruihan Liu and Yishu Liu and Xiaozhao Fang and Jiahui Pan and Guangming Lu and Zheng Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NPWBRCwomt} }
Due to the inherent vulnerability of neural networks, adversarial attacks present formidable challenges to the robustness and reliability of deep learning models. In contrast to traditional adversarial training (AT) methods that prioritize semantic distillation and purification, our work pioneers a novel discovery attributing the insufficient adversarial robustness of models to the challenges of spatial attention shift and channel activation disarray. To mitigate these issues, we propose a robust spatial-aligned and channel-adapted learning paradigm, which we term the StayFocused, that integrates spatial alignment and channel adaptation to enhance the focus region against adversarial attacks by adaptively recalibrating the spatial attention and channel responses. Specifically, the proposed StayFocused mainly benefits from two flexible mechanisms, i.e., Spatial-aligned Hypersphere Constraint (SHC) and Channel-adapted Prompting Calibration (CPC). Specifically, SHC aims to enhance intra-class compactness and inter-class separation between adversarial and natural samples by measuring the angular margins and distribution distance within the hypersphere space. Inspired by the top-$K$ candidate prompts from the clean sample, CPC is designed to dynamically recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels. To comprehensively learn feature representations, the StayFocused framework can be easily extended with additional branches in a multi-head training manner, further enhancing the model's robustness and adaptability. Extensive experiments on multiple benchmark datasets consistently demonstrate the effectiveness and superiority of our StayFocused over state-of-the-art baselines.
Stay Focused is All You Need for Adversarial Robustness
[ "Bingzhi Chen", "Ruihan Liu", "Yishu Liu", "Xiaozhao Fang", "Jiahui Pan", "Guangming Lu", "Zheng Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NMMyGy1kKZ
@inproceedings{ xiao2024hivg, title={Hi{VG}: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding}, author={Linhui Xiao and Xiaoshan Yang and Fang Peng and Yaowei Wang and Changsheng Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NMMyGy1kKZ} }
Visual grounding, which aims to ground a visual region via natural language, is a task that heavily relies on cross-modal alignment. Existing works utilize uni-modal pre-trained models to transfer visual or linguistic knowledge separately while ignoring the corresponding multimodal information. Motivated by recent advancements in contrastive language-image pre-training and low-rank adaptation (LoRA) methods, we aim to solve the grounding task based on multimodal pre-training. However, there exist significant task gaps between pre-training and grounding. Therefore, to address these gaps, we propose a concise and efficient hierarchical multimodal fine-grained modulation framework, namely HiVG. Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge and a hierarchical multimodal low-rank adaptation (HiLoRA) paradigm. The cross-modal bridge can address the inconsistency between visual features and those required for grounding, and establish a connection between multi-level visual and text features. HiLoRA prevents the accumulation of perceptual errors by adapting the cross-modal features from shallow to deep layers in a hierarchical manner. Experimental results on five datasets demonstrate the effectiveness of our approach and showcase its significant grounding capabilities as well as promising energy efficiency advantages. The project page: https://github.com/linhuixiao/HiVG.
HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding
[ "Linhui Xiao", "Xiaoshan Yang", "Fang Peng", "Yaowei Wang", "Changsheng Xu" ]
Conference
poster
2404.13400
[ "https://github.com/linhuixiao/hivg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NLB1zvVf2I
@inproceedings{ miaoxin2024ganbased, title={{GAN}-based Symmetric Embedding Costs Adjustment for Enhancing Image Steganographic Security}, author={Ye Miaoxin and Zhou Saixing and Weiqi Luo and Shunquan Tan and Jiwu Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NLB1zvVf2I} }
Designing embedding costs is pivotal in modern image steganography. Many studies have shown that adjusting symmetric embedding costs to asymmetric ones can enhance steganographic security. However, most existing methods heavily depend on manually defined parameters or rules, limiting security performance improvements. To overcome this limitation, we introduce an advanced GAN-based framework that transitions symmetric costs to asymmetric ones without the manual intervention seen in existing approaches, such as the detailed specification of cost modulation directions and magnitudes. In our framework, we first obtain symmetric costs for a cover image, which is randomly split into two sub-images, with part of the secret information embedded into one. Subsequently, we design a GAN model to adjust the embedding costs of the second sub-image to asymmetric ones, facilitating the secure embedding of the remaining secret information. To support our phased embedding approach, our GAN's discriminator incorporates two steganalyzers with different tasks: distinguishing the generator's final output, i.e., the stego image, from both the input cover image and the partially embedded stego image, providing diverse guidance to the generator. In addition, we introduce a simple yet effective update strategy to ensure a stable training process. Comprehensive experiments demonstrate that our method significantly enhances security over existing symmetric steganography techniques, achieving state-of-the-art levels compared to other methods focused on embedding cost adjustments. Additionally, detailed ablation studies validate our approach's effectiveness.
GAN-based Symmetric Embedding Costs Adjustment for Enhancing Image Steganographic Security
[ "Ye Miaoxin", "Zhou Saixing", "Weiqi Luo", "Shunquan Tan", "Jiwu Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NDYDDxAS9Z
@inproceedings{ fan2024detached, title={Detached and Interactive Multimodal Learning}, author={Yunfeng FAN and Wenchao Xu and Haozhao Wang and Junhong Liu and Song Guo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NDYDDxAS9Z} }
Recently, Multimodal Learning (MML) has gained significant interest as it compensates for single-modality limitations through comprehensive complementary information within multimodal data. However, traditional MML methods generally use a joint learning framework with a uniform learning objective, which can lead to the modality competition issue, where feedback predominantly comes from certain modalities, limiting the full potential of others. In response to this challenge, this paper introduces DI-MML, a novel detached MML framework designed to learn complementary information across modalities while avoiding modality competition. Specifically, DI-MML addresses competition by separately training each modality encoder with isolated learning objectives. It further encourages cross-modal interaction via a shared classifier that defines a common feature space and by employing a dimension-decoupled unidirectional contrastive (DUC) loss to facilitate modality-level knowledge transfer. Additionally, to account for varying reliability in sample pairs, we devise a certainty-aware logit weighting strategy to effectively leverage complementary information at the instance level during inference. Extensive experiments conducted on audio-visual, flow-image, and front-rear view datasets show the superior performance of our proposed method.
Detached and Interactive Multimodal Learning
[ "Yunfeng FAN", "Wenchao Xu", "Haozhao Wang", "Junhong Liu", "Song Guo" ]
Conference
poster
2407.19514
[ "https://github.com/fanyunfeng-bit/di-mml" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NCzKv7jZ2y
@inproceedings{ zhou2024timeline, title={Timeline and Boundary Guided Diffusion Network for Video Shadow Detection}, author={Haipeng Zhou and Hongqiu Wang and Tian Ye and Zhaohu Xing and Jun Ma and Ping Li and Qiong Wang and Lei Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=NCzKv7jZ2y} }
Video Shadow Detection (VSD) aims to detect shadow masks in frame sequences. Existing works suffer from inefficient temporal learning. Moreover, few works address the VSD problem by considering the characteristic (i.e., boundary) of shadows. Motivated by this, we propose a Timeline and Boundary Guided Diffusion (TBGDiff) network for VSD, in which we jointly take past-future temporal guidance and boundary information into account. In detail, we design a Dual Scale Aggregation (DSA) module for better temporal understanding by rethinking the affinity of long-term and short-term frames for the clipped video. Next, we introduce Shadow Boundary Aware Attention (SBAA) to utilize edge contexts for capturing the characteristics of shadows. Moreover, we are the first to introduce the Diffusion model for VSD, in which we explore a Space-Time Encoded Embedding (STEE) to inject temporal guidance into the Diffusion model for shadow detection. Benefiting from these designs, our model captures not only the temporal information but also the shadow properties. Extensive experiments show that the performance of our approach overtakes the state-of-the-art methods, verifying the effectiveness of our components. We release the codes at https://github.com/haipengzhou856/TBGDiff.
Timeline and Boundary Guided Diffusion Network for Video Shadow Detection
[ "Haipeng Zhou", "Hongqiu Wang", "Tian Ye", "Zhaohu Xing", "Jun Ma", "Ping Li", "Qiong Wang", "Lei Zhu" ]
Conference
oral
2408.11785
[ "https://github.com/haipengzhou856/tbgdiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N7KGLxoKcC
@inproceedings{ zeng2024mitigating, title={Mitigating World Biases: A Multimodal Multi-View Debiasing Framework for Fake News Video Detection}, author={Zhi Zeng and Minnan Luo and Xiangzheng Kong and Huan Liu and Hao Guo and Hao Yang and Zihan Ma and Xiang Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N7KGLxoKcC} }
Short videos have become an important channel for public sharing, as well as a fertile ground for fake news. Fake news video detection aims to judge the veracity of news based on its multimodal information, such as video, audio, text, images, and social context. Current detection models tend to learn multimodal dataset biases, i.e., spurious correlations between news modalities and veracity labels, as shortcuts, rather than learning how to integrate and reason over the multimodal information behind them, which seriously degrades their detection and generalization capabilities. To address these issues, we propose a Multimodal Multi-View Debiasing (MMVD) framework, which makes the first attempt to mitigate various multimodal biases for fake news video detection. Inspired by the ways multimodal short videos mislead people, we summarize three cognitive biases: static, dynamic, and social biases. MMVD puts forward a multi-view causal reasoning strategy to learn unbiased dependencies within the cognitive biases, thus enhancing the unbiased prediction of multimodal videos. Extensive experimental results show that MMVD improves the detection performance for multimodal fake news videos. Studies also confirm that our MMVD can mitigate multiple biases in complex real-world scenarios and improve the generalization ability of multimodal models.
Mitigating World Biases: A Multimodal Multi-View Debiasing Framework for Fake News Video Detection
[ "Zhi Zeng", "Minnan Luo", "Xiangzheng Kong", "Huan Liu", "Hao Guo", "Hao Yang", "Zihan Ma", "Xiang Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N7G8Kuykk5
@inproceedings{ liu2024optical, title={Optical Flow-Guided 6DoF Object Pose Tracking with an Event Camera}, author={Zibin Liu and Banglei Guan and Yang Shang and Shunkun Liang and Zhenbao Yu and Qifeng Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N7G8Kuykk5} }
Object pose tracking is one of the pivotal technologies in multimedia, attracting ever-growing attention in recent years. Existing methods employing traditional cameras encounter numerous challenges such as motion blur, sensor noise, partial occlusion, and changing lighting conditions. The emerging bio-inspired sensors, particularly event cameras, possess advantages such as high dynamic range and low latency, which hold the potential to address the aforementioned challenges. In this work, we present an optical flow-guided 6DoF object pose tracking method with an event camera. A 2D-3D hybrid feature extraction strategy is firstly utilized to detect corners and edges from events and object models, which characterizes object motion precisely. Then, we search for the optical flow of corners by maximizing the event-associated probability within a spatio-temporal window, and establish the correlation between corners and edges guided by optical flow. Furthermore, by minimizing the distances between corners and edges, the 6DoF object pose is iteratively optimized to achieve continuous pose tracking. Experimental results of both simulated and real events demonstrate that our methods outperform event-based state-of-the-art methods in terms of both accuracy and robustness.
Optical Flow-Guided 6DoF Object Pose Tracking with an Event Camera
[ "Zibin Liu", "Banglei Guan", "Yang Shang", "Shunkun Liang", "Zhenbao Yu", "Qifeng Yu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N7AdbHQFwA
@inproceedings{ zhang2024scalable, title={Scalable Multi-view Unsupervised Feature Selection with Structure Learning and Fusion}, author={Chenglong Zhang and Xinyan Liang and Peng Zhou and Zhaolong Ling and Yingwei Zhang and Xingyu Wu and Weiguo Sheng and Bingbing Jiang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N7AdbHQFwA} }
To tackle the high-dimensional data with multiple representations, multi-view unsupervised feature selection has emerged as a significant learning paradigm. However, previous methods suffer from the following dilemmas: (i) The emphasis is on selecting features to preserve the similarity structure of data, while neglecting the discriminative information in the cluster structure; (ii) They often impose the orthogonal constraint on the pseudo cluster labels, disrupting the locality in the cluster label space; (iii) Learning the similarity or cluster structure from all samples is likewise time-consuming. To this end, a Scalable Multi-view Unsupervised Feature Selection with structure learning and fusion (SMUFS) is proposed to jointly exploit the cluster structure and the similarity relations of data. Specifically, SMUFS introduces the sample-view weights to adaptively fuse the membership matrices that indicate cluster structures and serve as the pseudo cluster labels, such that a unified membership matrix across views can be effectively obtained to guide feature selection. Meanwhile, SMUFS performs graph learning from the membership matrix, preserving the locality of cluster labels and improving their discriminative capability. Further, an acceleration strategy has been developed to make SMUFS scalable for relatively large-scale data. A convergent solution is devised to optimize the formulated problem, and extensive experiments demonstrate the effectiveness and superiority of SMUFS.
Scalable Multi-view Unsupervised Feature Selection with Structure Learning and Fusion
[ "Chenglong Zhang", "Xinyan Liang", "Peng Zhou", "Zhaolong Ling", "Yingwei Zhang", "Xingyu Wu", "Weiguo Sheng", "Bingbing Jiang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N6hFKeLOfu
@inproceedings{ li2024goal, title={{GOAL}: Grounded text-to-image Synthesis with Joint Layout Alignment Tuning}, author={Yaqi Li and Han Fang and Zerun Feng and Kaijing Ma and Chao Ban and Xianghao Zang and LanXiang Zhou and Zhongjiang He and Jingyan Chen and Jiani Hu and Hao Sun and Huayu Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N6hFKeLOfu} }
Recent text-to-image (T2I) synthesis models have demonstrated intriguing abilities to produce high-quality images based on text prompts. However, current models still face the Text-Image Misalignment problem (e.g., attribute errors and relation mistakes) in compositional generation. Existing models have attempted to condition T2I models on grounding inputs to improve controllability while ignoring the explicit supervision from the layout conditions. To tackle this issue, we propose Grounded jOint lAyout aLignment (GOAL), an effective framework for T2I synthesis. Two novel modules, Discriminative Semantic Alignment (DSAlign) and Masked Attention Alignment (MAAlign), are proposed and incorporated into this framework to improve text-image alignment. DSAlign leverages discriminative tasks at the region-wise level to ensure low-level semantic alignment. MAAlign provides high-level attention alignment by guiding the model to focus on the target object. We also build a dataset, GOAL2K, for model fine-tuning, which comprises 2,000 semantically accurate image-text pairs and their layout annotations. Comprehensive evaluations on T2I-Compbench, NSR-1K, and DrawBench demonstrate the superior generation performance of our method. In particular, there are improvements of 19%, 13%, and 12% in the color, shape, and texture metrics of T2I-Compbench. Additionally, Q-Align metrics demonstrate that our method can generate images of higher quality.
GOAL: Grounded text-to-image Synthesis with Joint Layout Alignment Tuning
[ "Yaqi Li", "Han Fang", "Zerun Feng", "Kaijing Ma", "Chao Ban", "Xianghao Zang", "LanXiang Zhou", "Zhongjiang He", "Jingyan Chen", "Jiani Hu", "Hao Sun", "Huayu Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N670zjFHLz
@inproceedings{ chen2024uncovering, title={Uncovering Capabilities of Model Pruning in Graph Contrastive Learning}, author={Xueyuan Chen and Shangzhe Li and Junran Wu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N670zjFHLz} }
Graph contrastive learning has achieved great success in pre-training graph neural networks without ground-truth labels. Leading graph contrastive learning methods follow the classical scheme of contrastive learning, forcing the model to identify the essential information from augmented views. However, general augmented views are produced via random corruption or learning, which inevitably alters the semantics. Although domain-knowledge-guided augmentations alleviate this issue, the generated views are domain specific and undermine generalization. In this work, motivated by the strong representation ability of sparse models obtained by pruning, we reformulate the problem of graph contrastive learning as contrasting different model versions rather than augmented views. We first theoretically reveal the superiority of model pruning over data augmentations. In practice, we take the original graph as input and dynamically generate a perturbed graph encoder to contrast with the original encoder by pruning its transformation weights. Furthermore, since our method preserves the integrity of node embeddings, we are able to develop a local contrastive loss to tackle the hard negative samples that disturb model training. We extensively validate our method on various benchmarks regarding graph classification via unsupervised and transfer learning. Compared to state-of-the-art (SOTA) works, the proposed method consistently obtains better performance.
Uncovering Capabilities of Model Pruning in Graph Contrastive Learning
[ "Xueyuan Chen", "Shangzhe Li", "Junran Wu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
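The record above contrasts an encoder with a pruned copy of itself instead of contrasting two augmented graphs. The following rough sketch, under stated assumptions, shows one way such a perturbed model view could be produced by zeroing the smallest-magnitude transformation weights; the pruning ratio and global magnitude criterion are illustrative choices, not the paper's exact procedure.

```python
# Build a perturbed encoder view by magnitude-pruning a copy of the encoder,
# then embed the same input with both versions for a model-contrastive loss.
import copy
import torch
import torch.nn as nn

def pruned_copy(encoder: nn.Module, ratio: float = 0.2) -> nn.Module:
    perturbed = copy.deepcopy(encoder)
    with torch.no_grad():
        for module in perturbed.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(ratio * w.numel())
                if k > 0:
                    thresh = w.abs().flatten().kthvalue(k).values
                    w.mul_((w.abs() > thresh).float())   # zero out the smallest weights
    return perturbed

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
x = torch.randn(16, 32)
z_anchor = encoder(x)
z_view = pruned_copy(encoder)(x)   # the two model versions' embeddings are then contrasted
```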
null
https://openreview.net/forum?id=N3yngE4fTy
@inproceedings{ yang2024introducing, title={Introducing Common Null Space of Gradients for Gradient Projection Methods in Continual Learning}, author={Chengyi Yang and Mingda Dong and Xiaoyue Zhang and Jiayin Qi and Aimin Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N3yngE4fTy} }
Continual learning aims to learn new knowledge from a sequence of tasks without forgetting. Recent studies have found that projecting gradients onto the orthogonal direction of task-specific features is effective. However, these methods mainly focus on mitigating catastrophic forgetting by adopting old features to construct projection spaces, neglecting the potential to enhance plasticity and the valuable information contained in previous gradients. To enhance plasticity and effectively utilize the gradients from old tasks, we propose Gradient Projection in Common Null Space (GPCNS), which projects current gradients into the common null space of final gradients under all preceding tasks. Moreover, to integrate both feature and gradient information, we propose a collaborative framework that allows GPCNS to be utilized in conjunction with existing gradient projection methods as a plug-and-play extension that provides gradient information and better plasticity. Experimental evaluations conducted on three benchmarks demonstrate that GPCNS exhibits superior plasticity compared to conventional gradient projection methods. More importantly, GPCNS can effectively improve the backward transfer and average accuracy for existing gradient projection methods when applied as a plugin, which outperforms all the gradient projection methods without increasing learnable parameters and customized objective functions. The code is available at https://github.com/Hifipsysta/GPCNS.
Introducing Common Null Space of Gradients for Gradient Projection Methods in Continual Learning
[ "Chengyi Yang", "Mingda Dong", "Xiaoyue Zhang", "Jiayin Qi", "Aimin Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
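The GPCNS record above projects current gradients into the common null space of gradients from preceding tasks. As a minimal sketch of that projection step only, assuming flattened gradient vectors and an SVD-based rank threshold (both assumptions for illustration):

```python
# Project a gradient onto the (approximate) common null space of stored old-task
# gradients, so the update does not interfere with directions used by old tasks.
import numpy as np

def project_to_null_space(grad: np.ndarray, old_grads: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    # old_grads: (k, d) matrix whose rows are flattened gradients from earlier tasks.
    _, s, vt = np.linalg.svd(old_grads, full_matrices=False)
    row_space = vt[s > eps * s.max()]           # directions spanned by old gradients
    # Remove the component of grad that lies in the old-gradient row space.
    return grad - row_space.T @ (row_space @ grad)

d = 100
old = np.random.randn(5, d)
g = np.random.randn(d)
g_null = project_to_null_space(g, old)
print(np.abs(old @ g_null).max())               # ~0: no component along old-task gradients
```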
null
https://openreview.net/forum?id=N3yCbrRbKU
@inproceedings{ zareapoor2024fractional, title={Fractional Correspondence Framework in Detection Transformer}, author={Masoumeh Zareapoor and Pourya Shamsolmoali and Huiyu Zhou and Yue Lu and Salvador Garcia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N3yCbrRbKU} }
The Detection Transformer (DETR), by incorporating the Hungarian algorithm, has significantly simplified the matching process in object detection tasks. This algorithm facilitates optimal one-to-one matching of predicted bounding boxes to ground-truth annotations during training. While effective, this strict matching process does not inherently account for the varying densities and distributions of objects, leading to suboptimal correspondences such as failing to handle multiple detections of the same object or missing small objects. To address this, we propose the Regularized Transport Plan (RTP). RTP introduces a flexible matching strategy that captures the cost of aligning predictions with ground truths to find the most accurate correspondences between these sets. By utilizing the differentiable Sinkhorn algorithm, RTP allows for soft, fractional matching rather than strict one-to-one assignments. This approach enhances the model's capability to manage varying object densities and distributions effectively. Our extensive evaluations on the MS-COCO and VOC benchmarks demonstrate the effectiveness of our approach. RTP-DETR surpasses the performance of Deform-DETR and the recently introduced DINO-DETR, achieving absolute gains in mAP of {\bf{+3.8\%}} and {\bf{+1.7\%}}, respectively.
Fractional Correspondence Framework in Detection Transformer
[ "Masoumeh Zareapoor", "Pourya Shamsolmoali", "Huiyu Zhou", "Yue Lu", "Salvador Garcia" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
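The RTP-DETR record above replaces strict Hungarian assignment with soft, fractional matching via the differentiable Sinkhorn algorithm. The sketch below shows a generic entropy-regularized Sinkhorn solver of the kind referred to; the cost construction, uniform marginals, and hyperparameters are assumptions for illustration, not the paper's exact formulation.

```python
# Entropy-regularized optimal transport via Sinkhorn iterations: returns a soft
# (fractional) matching between predictions and ground-truth objects.
import torch

def sinkhorn(cost: torch.Tensor, eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    # cost: (num_predictions, num_gt) matching cost; returns a transport plan of the same shape.
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)          # uniform mass over predictions
    nu = torch.full((m,), 1.0 / m)          # uniform mass over ground truths
    K = torch.exp(-cost / eps)
    u = torch.ones(n)
    for _ in range(iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]      # fractional correspondences

cost = torch.rand(100, 7)                   # e.g. class + box costs for 100 queries, 7 objects
plan = sinkhorn(cost)
assign = plan.argmax(dim=1)                 # optional hard reading of the soft plan
```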
null
https://openreview.net/forum?id=N0ZsDei5PV
@inproceedings{ lim2024probabilistic, title={Probabilistic Vision-Language Representation for Weakly Supervised Temporal Action Localization}, author={GeunTaek Lim and Hyunwoo Kim and Joonsoo Kim and Yukyung Choi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N0ZsDei5PV} }
Weakly supervised temporal action localization (WTAL) aims to detect action instances in untrimmed videos with only video-level annotations. As many existing works optimize WTAL models based on action classification labels, they encounter the task discrepancy problem (i.e., localization-by-classification). To tackle this issue, recent studies have attempted to utilize action category names as auxiliary semantic knowledge with vision-language pre-training (VLP). However, there are still areas where existing research falls short. Previous approaches primarily focused on leveraging textual information from language models but overlooked the alignment of dynamic human action and VLP knowledge in joint space. Furthermore, the deterministic representation employed in previous studies struggles to capture fine-grained human motion. To address these problems, we propose a novel framework that aligns human action knowledge and VLP knowledge in the probabilistic embedding space. Moreover, we propose intra- and inter-distribution contrastive learning to enhance the probabilistic embedding space based on statistical similarities. Extensive experiments and ablation studies reveal that our method significantly outperforms all previous state-of-the-art methods. Our code will be available after publication.
Probabilistic Vision-Language Representation for Weakly Supervised Temporal Action Localization
[ "GeunTaek Lim", "Hyunwoo Kim", "Joonsoo Kim", "Yukyung Choi" ]
Conference
poster
2408.05955
[ "https://github.com/sejong-rcv/pvlr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=N0LRisF0ZU
@inproceedings{ wei2024hearing, title={Hearing the Moment with MetaEcho! From Physical to Virtual in Synchronized Sound Recording}, author={Zheng WEI and Yuzheng Chen and Wai Tong and Xuan Zong and Huamin Qu and Xian Xu and LIK-HANG LEE}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=N0LRisF0ZU} }
In film education, high expenses and limited space significantly challenge the teaching of synchronized sound recording (SSR). Traditional methods, which emphasize theory with limited practical experience, often fail to bridge the gap between theoretical understanding and practical application. As such, we introduce MetaEcho, an educational virtual reality system that leverages presence theory for teaching SSR. MetaEcho provides realistic simulations of various recording equipment and facilitates communication between learners and instructors, offering an immersive learning experience that closely mirrors actual practice. An evaluation with 24 students demonstrated that MetaEcho surpasses the traditional method in presence, collaboration, usability, realism, comprehensibility, and creativity. Three experts also commented on the benefits of MetaEcho and the opportunities for promoting SSR education in the metaverse era.
Hearing the Moment with MetaEcho! From Physical to Virtual in Synchronized Sound Recording
[ "Zheng WEI", "Yuzheng Chen", "Wai Tong", "Xuan Zong", "Huamin Qu", "Xian Xu", "LIK-HANG LEE" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MvkLwdeW5q
@inproceedings{ yang2024graphlearner, title={GraphLearner: Graph Node Clustering with Fully Learnable Augmentation}, author={Xihong Yang and Erxue Min and KE LIANG and Yue Liu and Siwei Wang and sihang zhou and Huijun Wu and Xinwang Liu and En Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MvkLwdeW5q} }
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters. The quality of contrastive samples is crucial for achieving better performance, making augmentation techniques a key factor in the process. However, the augmentation samples in existing methods are always predefined by human experience and agnostic to the downstream clustering task, leading to high human resource costs and poor performance. To overcome these limitations, we propose Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner. It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC. GraphLearner incorporates two learnable augmentors specifically designed for capturing attribute and structural information. Moreover, we introduce two refinement matrices, including the high-confidence pseudo-label matrix and the cross-view sample similarity matrix, to enhance the reliability of the learned affinity matrix. During the training procedure, we notice that training the learnable augmentors and the contrastive learning networks involves distinct optimization goals. In other words, we should guarantee both the consistency of the embeddings and the diversity of the augmented samples. To address this challenge, we propose an adversarial learning mechanism within our method. Besides, we leverage a two-stage training strategy to refine the high-confidence matrices. Extensive experimental results on six benchmark datasets validate the effectiveness of GraphLearner.
GraphLearner: Graph Node Clustering with Fully Learnable Augmentation
[ "Xihong Yang", "Erxue Min", "KE LIANG", "Yue Liu", "Siwei Wang", "sihang zhou", "Huijun Wu", "Xinwang Liu", "En Zhu" ]
Conference
poster
2212.03559
[ "https://github.com/xihongyang1999/graphlearner" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MrAsNqfMYY
@inproceedings{ wang2024languagedriven, title={Language-Driven Interactive Shadow Detection}, author={Hongqiu Wang and Wei Wang and Haipeng Zhou and Huihui Xu and Shaozhi Wu and Lei Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MrAsNqfMYY} }
Traditional shadow detectors often identify all shadow regions of static images or video sequences. This work presents the Referring Video Shadow Detection (RVSD), which is an innovative task that rejuvenates the classic paradigm by facilitating the segmentation of particular shadows in videos based on descriptive natural language prompts. This novel RVSD not only achieves segmentation of arbitrary shadow areas of interest based on descriptions (flexibility) but also allows users to interact with visual content more directly and naturally by using natural language prompts (interactivity), paving the way for abundant applications ranging from advanced video editing to virtual reality experiences. To pioneer the RVSD research, we curated a well-annotated RVSD dataset, which encompasses 86 videos and a rich set of 15,011 paired textual descriptions with corresponding shadows. To the best of our knowledge, this dataset is the first one for addressing RVSD. Based on this dataset, we propose a Referring Shadow-Track Memory Network (RSM-Net) for addressing the RVSD task. In our RSM-Net, we devise a Twin-Track Synergistic Memory (TSM) to store intra-clip memory features and hierarchical inter-clip memory features, and then pass these memory features into a memory read module to refine features of the current video frame for referring shadow detection. We also develop a Mixed-Prior Shadow Attention (MSA) to utilize physical priors to obtain a coarse shadow map for learning more visual features by weighting it with the input video frame. Experimental results show that our RSM-Net achieves state-of-the-art performance for RVSD with a notable Overall IOU increase of 4.4%. We shall release our code and dataset for future research.
Language-Driven Interactive Shadow Detection
[ "Hongqiu Wang", "Wei Wang", "Haipeng Zhou", "Huihui Xu", "Shaozhi Wu", "Lei Zhu" ]
Conference
poster
2408.08543
[ "https://github.com/whq-xxh/rvsd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MqRUNCoeic
@inproceedings{ wang2024gslamot, title={{GSLAMOT}: A Tracklet and Query Graph-based Simultaneous Locating, Mapping, and Multiple Object Tracking System}, author={Shuo Wang and Yongcai Wang and Zhimin Xu and Yongyu Guo and Wanting Li and Zhe Huang and xuewei Bai and Deying Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MqRUNCoeic} }
Interacting with mobile objects in unfamiliar environments requires simultaneously locating, mapping, and tracking the 3D poses of multiple objects. This paper proposes a Tracklet and Query Graph based framework, GSLAMOT, to address this challenge. GSLAMOT represents the dynamic scene by a combination of a semantic map, the agent trajectory, and an online maintained Tracklet Graph (TG). The TG tracks and predicts the 3D poses of the detected active objects. A Query Graph (QG) is constructed in each frame by object detection to query and update the TG, as well as the semantic map and the agent trajectory. For accurate object association, a Multi-criteria Subgraph Similarity Association (MSSA) method is proposed to find matched objects between the detections in the QG and the predicted tracklets in the TG. Then an Object-centric Graph Optimization (OGO) method is proposed to optimize the TG, the semantic map, and the agent trajectory simultaneously. It triangulates the detected objects into the map to enrich the map's semantic information. We also address efficiency issues so that the three tightly coupled tasks can be handled in parallel. Experiments are conducted on KITTI, Waymo, and an emulated Traffic Congestion dataset that highlights challenging scenarios including congested objects. Experiments show that GSLAMOT accurately tracks crowded objects while performing accurate SLAM in challenging scenarios, outperforming state-of-the-art methods.
GSLAMOT: A Tracklet and Query Graph-based Simultaneous Locating, Mapping, and Multiple Object Tracking System
[ "Shuo Wang", "Yongcai Wang", "Zhimin Xu", "Yongyu Guo", "Wanting Li", "Zhe Huang", "xuewei Bai", "Deying Li" ]
Conference
poster
2408.09191
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Mj1yycahqi
@inproceedings{ cai2024towards, title={Towards Effective Federated Graph Anomaly Detection via Self-boosted Knowledge Distillation}, author={Jinyu Cai and Yunhe Zhang and Zhoumin Lu and Wenzhong Guo and See-Kiong Ng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Mj1yycahqi} }
Graph anomaly detection (GAD) aims to identify anomalous graphs that significantly deviate from other ones, which has raised growing attention due to the broad existence and complexity of graph-structured data in many real-world scenarios. However, existing GAD methods usually execute with centralized training, which may lead to privacy leakage risk in some sensitive cases, thereby impeding collaboration among organizations seeking to collectively develop robust GAD models. Although federated learning offers a promising solution, the prevalent non-IID problems and high communication costs present significant challenges, particularly pronounced in collaborations with graph data distributed among different participants. To tackle these challenges, we propose an effective federated graph anomaly detection framework (FGAD). We first introduce an anomaly generator to perturb the normal graphs to be anomalous and train a powerful anomaly detector by distinguishing generated anomalous graphs from normal ones. We subsequently leverage a student model to distill knowledge from the trained anomaly detector (teacher model), which aims to maintain the personality of local models and alleviate the adverse impact of non-IID problems. Additionally, we design an effective collaborative learning mechanism that facilitates the personalization preservation of local models and significantly reduces communication costs among clients. Empirical results of diverse GAD tasks demonstrate the superiority and efficiency of FGAD.
Towards Effective Federated Graph Anomaly Detection via Self-boosted Knowledge Distillation
[ "Jinyu Cai", "Yunhe Zhang", "Zhoumin Lu", "Wenzhong Guo", "See-Kiong Ng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Mfs9CkNoe4
@inproceedings{ wang2024perceplie, title={Percep{LIE}: A New Path to Perceptual Low-Light Image Enhancement}, author={Cong Wang and Chengjin Yu and Jie Mu and Wei Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Mfs9CkNoe4} }
While current CNN-based low-light image enhancement (LIE) approaches have achieved significant progress, they often fail to deliver good perceptual quality, which requires restoring finer details and more natural colors. To address these problems, we set a new path, called PercepLIE, by presenting the VQGAN with Multi-luminance Detail Compensation (MDC) and Global Color Adjustment (GCA). Specifically, observing that the latent light features of low-light images are quite different from those captured in normal light, we utilize VQGAN to explore the latent light representation of normal-light images to help estimate the mapping between low-light and normal-light images. Furthermore, we employ Gamma correction with varying Gamma values on the gradient to create multi-luminance details, forming the basis for our MDC module to facilitate better detail estimation. To optimize the colors of low-light input images, we introduce a simple yet effective GCA module that is based on a spatially-varying representation between the normal-light images estimated in this module and the low-light inputs. By combining the VQGAN with MDC and GCA within a stage-wise training mechanism, our method generates images with finer details and natural colors and achieves favorable performance on both synthetic and real-world datasets in terms of perceptual quality metrics including NIQE, PI, and LPIPS. The source codes will be made available at \url{https://github.com/supersupercong/PercepLIE}.
PercepLIE: A New Path to Perceptual Low-Light Image Enhancement
[ "Cong Wang", "Chengjin Yu", "Jie Mu", "Wei Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Mf00X7fShq
@inproceedings{ xin2024advancing, title={Advancing Quantization Steps Estimation : A Two-Stream Network Approach for Enhancing Robustness}, author={Cheng Xin and Hao Wang and Jinwei Wang and Xiangyang Luo and Bin Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Mf00X7fShq} }
In Joint Photographic Experts Group (JPEG) image steganalysis and forensics, the quantization step can reveal the history of image operations. Several methods for estimating the quantization step have been proposed by researchers. However, existing algorithms fail to account for robustness, which limits their application. To solve the above problems, we propose a two-stream network structure based on the Swin Transformer. The spatial domain features of JPEG images exhibit strong robustness but low accuracy. Conversely, frequency domain features demonstrate high accuracy but weak robustness. Therefore, we design a two-stream network with the multi-scale features of the Swin Transformer to extract spatial domain features with high robustness and frequency domain features with high accuracy, respectively. Furthermore, to adaptively fuse features in both the frequency domain and the spatial domain, we design a Spatial-frequency Information Dynamic Fusion (SIDF) module to dynamically allocate weights. Finally, we modify the network from a regression model to a classification model to speed up convergence and improve the accuracy of the algorithm. The experimental results show that the accuracy of the proposed method is higher than 98% on clean images. Meanwhile, in robustness tests, the proposed algorithm maintains an average accuracy of over 81%.
Advancing Quantization Steps Estimation : A Two-Stream Network Approach for Enhancing Robustness
[ "Cheng Xin", "Hao Wang", "Jinwei Wang", "Xiangyang Luo", "Bin Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=McL5I0oSVg
@inproceedings{ huo2024monocular, title={Monocular Human-Object Reconstruction in the Wild}, author={Chaofan Huo and Ye Shi and Jingya Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=McL5I0oSVg} }
Learning the prior knowledge of the 3D human-object spatial relation is crucial for reconstructing human-object interaction from images and understanding how humans interact with objects in 3D space. Previous works learn this prior from datasets collected in controlled environments, but due to the diversity of domains, they struggle to generalize to real-world scenarios. To overcome this limitation, we present a 2D-supervised method that learns the 3D human-object spatial relation prior purely from 2D images in the wild. Our method utilizes a flow-based neural network to learn the prior distribution of the 2D human-object keypoint layout and viewports for each image in the dataset. The effectiveness of the prior learned from 2D images is demonstrated on the human-object reconstruction task by applying the prior to tune the relative pose between the human and the object during the post-optimization stage. To validate and benchmark our method on in-the-wild images, we collect the WildHOI dataset from the YouTube website, which consists of various interactions with 8 objects in real-world scenarios. We conduct experiments on the indoor BEHAVE dataset and the outdoor WildHOI dataset. The results show that our method achieves almost comparable performance with fully 3D supervised methods on the BEHAVE dataset, even though we only utilize the 2D layout information, and outperforms previous methods in terms of generality and interaction diversity on in-the-wild images.
Monocular Human-Object Reconstruction in the Wild
[ "Chaofan Huo", "Ye Shi", "Jingya Wang" ]
Conference
poster
2407.20566
[ "https://github.com/huochf/WildHOI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Mc3G3ciRSG
@inproceedings{ gao2024starvp, title={{STAR}-{VP}: Improving Long-term Viewport Prediction in 360{\textdegree} Videos via Space-aligned and Time-varying Fusion}, author={Baoqi Gao and Daoxu Sheng and Lei Zhang and Qi Qi and Bo He and Zirui Zhuang and Jingyu Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Mc3G3ciRSG} }
Accurate long-term viewport prediction in tile-based 360° video adaptive streaming helps pre-download tiles for a further future, thus establishing a longer buffer to cope with network fluctuations. Long-term viewport motion is mainly influenced by Historical viewpoint Trajectory (HT) and Video Content information (VC). However, HT and VC are difficult to align in space due to their different modalities, and their relative importance in viewport prediction varies across prediction time steps. In this paper, we propose STAR-VP, a model that fuses HT and VC in a Space-aligned and Time-vARying manner for Viewport Prediction. Specifically, we first propose a novel saliency representation $salxyz$ and a Spatial Attention Module to solve the spatial alignment of HT and VC. Then, we propose a two-stage fusion approach based on Transformer and gating mechanisms to capture their time-varying importance. Visualization of attention scores intuitively demonstrates STAR-VP's capability in space-aligned and time-varying fusion. Evaluation on three public datasets shows that STAR-VP achieves state-of-the-art accuracy for long-term (2-5s) viewport prediction without sacrificing short-term ($<$1s) prediction performance.
STAR-VP: Improving Long-term Viewport Prediction in 360° Videos via Space-aligned and Time-varying Fusion
[ "Baoqi Gao", "Daoxu Sheng", "Lei Zhang", "Qi Qi", "Bo He", "Zirui Zhuang", "Jingyu Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MZQYGtOOKU
@inproceedings{ ji2024towards, title={Towards Flexible Evaluation for Generative Visual Question Answering}, author={Huishan Ji and Qingyi Si and Zheng Lin and Weiping Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MZQYGtOOKU} }
Throughout the rapid development of multimodal large language models (MLLMs), a crucial ingredient is a fair and accurate evaluation of their multimodal comprehension abilities. Although Visual Question Answering (VQA) could serve as a developed test field, limitations of VQA evaluation, like the inflexible pattern of Exact Match, have hindered MLLMs from demonstrating their real capability and discouraged rich responses. Therefore, this paper proposes to use semantics-based evaluators for assessing unconstrained open-ended responses on VQA datasets. As the characteristics of VQA make such evaluation significantly different from the traditional Semantic Textual Similarity (STS) task, we propose three key properties, i.e., Alignment, Consistency and Generalization, together with a corresponding dataset, Assessing VQA Evaluators (AVE), to systematically analyze the behaviour and compare the performance of various evaluators, including LLM-based ones. In addition, this paper proposes a Semantically Flexible VQA Evaluator (SFVE) with a meticulous design based on the unique features of the VQA response evaluation task. Experimental results verify the feasibility of model-based VQA evaluation and the effectiveness of the proposed evaluator, which surpasses existing semantic evaluators by a large margin. The proposed training scheme generalizes to both BERT-like encoders and decoder-only LLMs.
Towards Flexible Evaluation for Generative Visual Question Answering
[ "Huishan Ji", "Qingyi Si", "Zheng Lin", "Weiping Wang" ]
Conference
oral
2408.00300
[ "https://github.com/jihuishan/flexible_evaluation_for_vqa_mm24" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MReRoSm2rH
@inproceedings{ wei2024dopra, title={{DOPRA}: Decoding Over-accumulation Penalization and Re-allocation in Specific Weighting Layer}, author={Jinfeng Wei and Xiao Feng Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MReRoSm2rH} }
In this work, we introduce DOPRA, a novel approach designed to mitigate hallucinations in multi-modal large language models (MLLMs). Unlike existing solutions that typically involve costly supplementary training data or the integration of external knowledge sources, DOPRA innovatively addresses hallucinations by decoding-specific weighted layer penalties and redistribution, offering an economical and effective solution without the need for additional resources. DOPRA is grounded in unique insights into the intrinsic mechanisms controlling hallucinations within MLLMs, especially the models' tendency to over-rely on a subset of summary tokens in the self-attention matrix, neglecting critical image-related information. This phenomenon is particularly pronounced in certain strata. To counteract this over-reliance, DOPRA employs a strategy of weighted overlay penalties and redistribution in specific layers, such as the 12th layer, during the decoding process. Furthermore, DOPRA includes a retrospective allocation process that re-examines the sequence of generated tokens, allowing the algorithm to reallocate token selection to better align with the actual image content, thereby reducing the incidence of hallucinatory descriptions in auto-generated captions. Overall, DOPRA represents a significant step forward in improving the output quality of MLLMs by systematically reducing hallucinations through targeted adjustments during the decoding process.
DOPRA: Decoding Over-accumulation Penalization and Re-allocation in Specific Weighting Layer
[ "Jinfeng Wei", "Xiao Feng Zhang" ]
Conference
poster
2407.15130
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MQV8yTL4s9
@inproceedings{ liu2024building, title={Building Trust in Decision with Conformalized Multi-view Deep Classification}, author={Wei Liu and Yufei Chen and Xiaodong Yue}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MQV8yTL4s9} }
Uncertainty-aware multi-view deep classification methods have markedly improved the reliability of results amidst the challenges posed by noisy multi-view data, primarily by quantifying the uncertainty of predictions. Despite their efficacy, these methods encounter limitations in real-world applications: 1) They are limited to providing a single class prediction per instance, which can lead to inaccuracies when dealing with samples that are difficult to classify due to inconsistencies across multiple views. 2) While these methods offer a quantification of prediction uncertainty, the magnitude of such uncertainty often varies with different datasets, leading to confusion among decision-makers due to the lack of a standardized measure for uncertainty intensity. To address these issues, we introduce Conformalized Multi-view Deep Classification (CMDC), a novel method that generates set-valued rather than single-valued predictions and integrates uncertain predictions as an explicit class category. Through end-to-end training, CMDC minimizes the size of prediction sets while guaranteeing that the set-valued predictions contain the true label with a user-defined probability, building trust in decision-making. The superiority of CMDC is validated through comprehensive theoretical analysis and empirical experiments on various multi-view datasets.
Building Trust in Decision with Conformalized Multi-view Deep Classification
[ "Wei Liu", "Yufei Chen", "Xiaodong Yue" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MOKqzGdjmF
@inproceedings{ qi2024visual, title={Visual Question Answering Driven Eye Tracking Paradigm for Identifying Children with Autism Spectrum Disorder}, author={Jiansong Qi and Yaping Huang and Ying Zhang and Sihui Zhang and Mei Tian and Yi Tian and Fanchao Meng and Lin Guan and Tianyi Chang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MOKqzGdjmF} }
As a non-contact method, eye-tracking data can be used to diagnose people with Autism Spectrum Disorder (ASD) by comparing the differences in eye movements between ASD and healthy people. However, existing works mainly employ a simple free-viewing paradigm or visual search paradigm with restricted or unnatural stimuli to collect the gaze patterns of adults or children with an average age of 6-to-8 years, hindering the early diagnosis and intervention of preschool children with ASD. In this paper, we propose a novel method for identifying children with ASD with three unique features: First, we design a novel eye-tracking paradigm that records Visual Question Answering (VQA) driven gaze patterns in complex natural scenes as a powerful guide for differentiating children with ASD. Second, we contribute a carefully designed dataset, named VQA4ASD, for collecting VQA-driven eye-tracking data from 2-to-6-year-old ASD and healthy children. To the best of our knowledge, this is the first dataset focusing on the early diagnosis of preschool children, which could help the community understand and explore the visual behaviors of ASD children. Third, we further develop a VQA-guided cooperative ASD screening network (VQA-CASN), in which both task-agnostic and task-specific visual scanpaths are explored simultaneously for ASD screening. Extensive experiments demonstrate that the proposed VQA-CASN achieves competitive performance with the proposed VQA-driven eye-tracking paradigm. The code and dataset will be publicly available.
Visual Question Answering Driven Eye Tracking Paradigm for Identifying Children with Autism Spectrum Disorder
[ "Jiansong Qi", "Yaping Huang", "Ying Zhang", "Sihui Zhang", "Mei Tian", "Yi Tian", "Fanchao Meng", "Lin Guan", "Tianyi Chang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MNvMDbaQXa
@inproceedings{ zhang2024vmambasci, title={Vmamba{SCI}: Dynamic Deep Unfolding Network with Mamba for Compressive Spectral Imaging}, author={Mingjin Zhang and Longyi Li and Wenxuan SHI and Jie Guo and Yunsong Li and Xinbo Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MNvMDbaQXa} }
Snapshot spectral compressive imaging can capture spectral information across multiple wavelengths in a single shot. The method, coded aperture snapshot spectral imaging (CASSI), aims to recover 3D spectral cubes from 2D measurements. Most existing methods employ Transformer-based deep unfolding frameworks, which alternately address a data subproblem and a prior subproblem. However, these frameworks lack flexibility regarding the sensing matrix and inter-stage interactions. In addition, the quadratic computational complexity of global Transformers and the restricted receptive field of local Transformers impact reconstruction efficiency and accuracy. In this paper, we propose a dynamic deep unfolding network with Mamba for compressive spectral imaging, called VmambaSCI. We integrate the spatial-spectral information of the sensing matrix into the data module and employ spatially adaptive operations in the stage interaction of the prior module. Furthermore, we develop a dual-domain scanning Mamba (DSMamba), featuring a novel spatial-channel scanning method for enhanced efficiency and accuracy. To our knowledge, this is the first Mamba-based model for compressive spectral imaging. Experimental results on the public database demonstrate the superiority of the proposed VmambaSCI over state-of-the-art approaches.
VmambaSCI: Dynamic Deep Unfolding Network with Mamba for Compressive Spectral Imaging
[ "Mingjin Zhang", "Longyi Li", "Wenxuan SHI", "Jie Guo", "Yunsong Li", "Xinbo Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MM4xkBF9fv
@inproceedings{ gao2024learning, title={Learning Optimal Combination Patterns for Lightweight Stereo Image Super-Resolution}, author={Hu Gao and Jing Yang and Ying Zhang and Jingfan Yang and Bowen Ma and Depeng Dang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MM4xkBF9fv} }
Stereo image super-resolution (stereoSR) strives to improve the quality of super-resolution by leveraging the auxiliary information provided by another perspective. Most approaches concentrate on refining module design and stacking massive network blocks to extract and integrate information. Although there have been advancements, the memory and computation costs are increasing as well. To tackle this issue, we propose a lattice structure that autonomously learns the optimal combination patterns of network blocks, which enables the efficient and precise acquisition of feature representations and ultimately achieves lightweight stereoSR. Specifically, we draw inspiration from the lattice phase equalizer and design the lattice stereo NAFBlock (LSNB) to bridge pairs of NAFBlocks using a re-weight block (RWBlock) through coupled butterfly-style topological structures. RWBlock empowers LSNB with the capability to explore various combination patterns of pairwise NAFBlocks by adaptive re-weighting of features. Moreover, we propose a lattice stereo attention module (LSAM) to search and transfer the most relevant features from another view. We name the resulting tightly interlinked architecture LSSR; extensive experiments demonstrate that our method performs competitively with the state-of-the-art.
Learning Optimal Combination Patterns for Lightweight Stereo Image Super-Resolution
[ "Hu Gao", "Jing Yang", "Ying Zhang", "Jingfan Yang", "Bowen Ma", "Depeng Dang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MJvG3qKu1v
@inproceedings{ luo2024freeenhance, title={FreeEnhance: Tuning-Free Image Enhancement via Content-Consistent Noising-and-Denoising Process}, author={Yang Luo and Yiheng Zhang and Zhaofan Qiu and Ting Yao and Zhineng Chen and Yu-Gang Jiang and Tao Mei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MJvG3qKu1v} }
The emergence of text-to-image generation models has led to the recognition that image enhancement, performed as post-processing, would significantly improve the visual quality of the generated images. Exploring diffusion models to enhance the generated images is nevertheless not trivial and necessitates delicately enriching plentiful details while preserving the visual appearance of key content in the original image. In this paper, we propose a novel framework, namely FreeEnhance, for content-consistent image enhancement using off-the-shelf image diffusion models. Technically, FreeEnhance is a two-stage process that first adds random noise to the input image and then capitalizes on a pre-trained image diffusion model (i.e., Latent Diffusion Models) to denoise and enhance the image details. In the noising stage, FreeEnhance is devised to add lighter noise to regions with higher frequency to preserve the high-frequency patterns (e.g., edges, corners) in the original image. In the denoising stage, we present three target properties as constraints to regularize the predicted noise, enhancing images with high acutance and high visual quality. Extensive experiments conducted on the HPDv2 dataset demonstrate that our FreeEnhance outperforms the state-of-the-art image enhancement models in terms of quantitative metrics and human preference. More remarkably, FreeEnhance also shows higher human preference compared to the commercial image enhancement solution of Magnific AI.
FreeEnhance: Tuning-Free Image Enhancement via Content-Consistent Noising-and-Denoising Process
[ "Yang Luo", "Yiheng Zhang", "Zhaofan Qiu", "Ting Yao", "Zhineng Chen", "Yu-Gang Jiang", "Tao Mei" ]
Conference
poster
2409.07451
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MAgFiw3yHG
@inproceedings{ wang2024semantic, title={Semantic Distillation from Neighborhood for Composed Image Retrieval}, author={Yifan Wang and Wuliang Huang and Lei Li and Chun Yuan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=MAgFiw3yHG} }
The challenging task of composed image retrieval aims to identify the matched image from a multi-modal query consisting of a reference image and a textual modifier. Most existing methods are devoted to composing unified query representations from the query images and texts, yet the distribution gaps between the hybrid-modal query representations and the visual target representations are neglected. However, directly incorporating target features into the query may cause ambiguous rankings and poor robustness due to insufficient exploration of the distinctions and to overfitting issues. To address the above concerns, we propose a novel framework termed SemAntic Distillation from Neighborhood (SADN) for composed image retrieval. To mitigate the distribution divergences, we construct neighborhood sampling from the target domain for each query and further aggregate neighborhood features with adaptive weights to restructure the query representations. Specifically, the adaptive weights are determined by the collaboration of two individual modules, namely correspondence-induced adaption and divergence-based correction. Correspondence-induced adaption accounts for capturing the correlation alignments from neighbor features under the guidance of the positive representations, and the divergence-based correction regulates the weights based on the embedding distances between hard negatives and the query in the latent space. Extensive experimental results and ablation studies on CIRR and FashionIQ validate that the proposed semantic distillation from neighborhood significantly outperforms baseline methods.
Semantic Distillation from Neighborhood for Composed Image Retrieval
[ "Yifan Wang", "Wuliang Huang", "Lei Li", "Chun Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=M87zOOryOG
@inproceedings{ zheng2024speech, title={Speech Reconstruction from Silent Lip and Tongue Articulation by Diffusion Models and Text-Guided Pseudo Target Generation}, author={Rui-Chen Zheng and Yang Ai and Zhen-Hua Ling}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=M87zOOryOG} }
This paper studies the task of speech reconstruction from ultrasound tongue images and optical lip videos recorded in a silent speaking mode, where people only activate their intra-oral and extra-oral articulators without producing real speech. This task falls under the umbrella of articulatory-to-acoustic (A2A) conversion and may also be referred to as a silent speech interface. To overcome the domain discrepancy between silent and standard vocalized articulation, we introduce a novel pseudo target generation strategy. It integrates the text modality to align with articulatory movements, thereby guiding the generation of pseudo acoustic features for supervised training on speech reconstruction from silent articulation. Furthermore, we propose to employ a denoising diffusion probabilistic model as the fundamental architecture for the A2A conversion task and train the model using a combined training approach with the generated pseudo acoustic features. Experiments show that our proposed method significantly improves the intelligibility and naturalness of the reconstructed speech in the silent speaking mode compared to all baseline methods. Specifically, the word error rate of the reconstructed speech decreases by approximately 5\% when measured using an automatic speech recognition engine for intelligibility assessment, and the subjective mean opinion score for naturalness improves by 0.14. Moreover, analytical experiments reveal that the proposed pseudo target generation strategy can generate pseudo acoustic features that synchronize better with articulatory movements than previous strategies. Samples are available at our project page.
Speech Reconstruction from Silent Lip and Tongue Articulation by Diffusion Models and Text-Guided Pseudo Target Generation
[ "Rui-Chen Zheng", "Yang Ai", "Zhen-Hua Ling" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=M4XBOWqXnP
@inproceedings{ he2024tutcrs, title={{TUT}4{CRS}: Time-aware User-preference Tracking for Conversational Recommendation System}, author={Dongxiao He and Jinghan Zhang and Xiaobao Wang and Meng Ge and Zhiyong Feng and Longbiao Wang and Xiaoke Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=M4XBOWqXnP} }
The Conversational Recommendation System (CRS) aims to capture users' dynamic preferences and provide item recommendations based on multi-turn conversations. However, effectively modeling these dynamic preferences faces challenges due to conversational limitations, which mainly manifest as limited turns in a conversation (quantity aspect) and low compliance with queries (quality aspect). Previous studies often address these challenges in isolation, overlooking their interconnected nature. The fundamental issue underlying both problems lies in the potential abrupt changes in user preferences, to which CRS may not respond promptly. We acknowledge that user preferences are influenced by temporal factors, serving as a bridge between conversation quantity and quality. Therefore, we propose a more comprehensive CRS framework called Time-aware User-preference Tracking for Conversational Recommendation System (TUT4CRS), leveraging time dynamics to tackle both issues simultaneously. Specifically, we construct a global time interaction graph to incorporate rich external information and establish a local time-aware weight graph based on this information to adeptly select queries and effectively model users' dynamic preferences. Extensive experiments on two real-world datasets validate that TUT4CRS can significantly improve recommendation performance while reducing the number of conversation turns.
TUT4CRS: Time-aware User-preference Tracking for Conversational Recommendation System
[ "Dongxiao He", "Jinghan Zhang", "Xiaobao Wang", "Meng Ge", "Zhiyong Feng", "Longbiao Wang", "Xiaoke Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=M33uCfAfgG
@inproceedings{ he2024textprompt, title={Text-prompt Camouflaged Instance Segmentation with Graduated Camouflage Learning}, author={zhentao he and Changqun Xia and Shengye Qiao and Jia Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=M33uCfAfgG} }
Camouflaged instance segmentation (CIS) aims to seamlessly detect and segment objects blending with their surroundings. Existing CIS methods rely heavily on fully-supervised training with massive, precisely annotated data, consuming considerable annotation effort yet still struggling to segment highly camouflaged objects accurately. Despite their visual similarity to the background, camouflaged objects differ semantically. Since text associated with images offers explicit semantic cues to underscore this difference, in this paper we propose a novel approach: the first \textbf{T}ext-\textbf{P}rompt based weakly-supervised camouflaged instance segmentation method, named TPNet, leveraging semantic distinctions for effective segmentation. Specifically, TPNet operates in two stages: the generation of pseudo masks followed by a self-training process. In the pseudo mask generation stage, we innovatively align text prompts with images using a pre-trained language-image model to obtain region proposals containing camouflaged instances and specific text prompts. Additionally, a Semantic-Spatial Iterative Fusion module is ingeniously designed to assimilate spatial information with semantic insights, iteratively refining the pseudo masks. In the following stage, we employ Graduated Camouflage Learning, a straightforward self-training optimization strategy that evaluates camouflage levels to sequence training from simple to complex images, facilitating an effective learning gradient. Through the collaboration of the two stages, our method is evaluated comprehensively on two common benchmarks and demonstrates a significant advancement, delivering a novel solution that bridges the gap between weakly-supervised and highly camouflaged instance segmentation.
Text-prompt Camouflaged Instance Segmentation with Graduated Camouflage Learning
[ "zhentao he", "Changqun Xia", "Shengye Qiao", "Jia Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ls7KytN8C8
@inproceedings{ zong2024toward, title={Toward Explainable Physical Audiovisual Commonsense Reasoning}, author={Daoming Zong and Chaoyue Ding and Kaitao Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ls7KytN8C8} }
For AI systems to be safely and reliably grounded in the real world, they should possess the ability of physical commonsense reasoning, i.e. they are desired to understand the physical properties, affordances, and maneuverability of objects in everyday life. Physical commonsense reasoning is essentially a multisensory task as physical properties of objects are manifested through multiple perception modalities, including both visual and auditory. In this study, we constructed two new benchmarks, called PACS-Reason and PACS-Reason+, for explainable physical audiovisual commonsense reasoning (EPACS), in which each datapoint is accompanied by a golden detailed rationale (intermediate reasoning path) to explain the answer selection. Moreover, we present PAVC-Reasoner, a multimodal large language model (LLM) designed to reason about physical commonsense attributes. The model aligns different modalities with the language modality by integrating three different perceivers for cross-modal pretraining and instruction finetuning at multiple granularities. It utilizes an LLM as a cognitive engine to process multimodal inputs and output convincing intermediate reasoning paths as justification for inferring answers. Numerous experiments have demonstrated the effectiveness and superiority of PAVC-Reasoner as a baseline model for studying EPACS. Most attractively, PAVC-Reasoner is capable of reasoning and obtaining strong interpretable explicit reasoning paths, signifying a significant stride towards real-world physical commonsense reasoning.
Toward Explainable Physical Audiovisual Commonsense Reasoning
[ "Daoming Zong", "Chaoyue Ding", "Kaitao Chen" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Lpbabf0Kyd
@inproceedings{ zhang2024embracing, title={Embracing Domain Gradient Conflicts: Domain Generalization Using Domain Gradient Equilibrium}, author={ZUYU ZHANG and YAN LI and Byeong-Seok Shin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Lpbabf0Kyd} }
Single domain generalization (SDG) aims to learn a generalizable model from only one source domain available to unseen target domains. Existing SDG techniques rely on data or feature augmentation to generate distributions that complement the source domain. However, these approaches fail to address the challenge where gradient conflicts from synthesized domains impede the learning of domain-invariant representation. Inspired by the concept of mechanical equilibrium in physics, we propose a novel conflict-aware approach named domain gradient equilibrium for SDG. Unlike prior conflict-aware SDG methods that alleviate the gradient conflicts by setting them to zero or random values, the proposed domain gradient equilibrium method first decouples gradients into domain-invariant and domain-specific components. The domain-specific gradients are then adjusted and reweighted to achieve equilibrium, steering the model optimization toward a domain-invariant direction to enhance generalization capability. We conduct comprehensive experiments on four image recognition benchmarks, and our method achieves an accuracy improvement of 2.94% in the PACS dataset over existing state-of-the-art approaches, demonstrating the effectiveness of our proposed approach.
Embracing Domain Gradient Conflicts: Domain Generalization Using Domain Gradient Equilibrium
[ "ZUYU ZHANG", "YAN LI", "Byeong-Seok Shin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Lk9hIPV5uu
@inproceedings{ zhe2024multigranularity, title={Multi-Granularity Hand Action Detection}, author={Ting Zhe and Jing Zhang and Yongqian Li and Yong Luo and Han Hu and Dacheng Tao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Lk9hIPV5uu} }
Detecting hand actions in videos is crucial for understanding video content and has diverse real-world applications. Existing approaches often focus on whole-body actions or coarse-grained action categories, lacking fine-grained hand-action localization information. To fill this gap, we introduce the FHA-Kitchens (Fine-Grained Hand Actions in Kitchen Scenes) dataset, providing both coarse- and fine-grained hand action categories along with localization annotations. This dataset comprises 2,377 video clips and 30,047 frames, annotated with approximately 200k bounding boxes and 880 action categories. Evaluation of existing action detection methods on FHA-Kitchens reveals varying generalization capabilities across different granularities. To handle multi-granularity in hand actions, we propose MG-HAD, an End-to-End Multi-Granularity Hand Action Detection method. It incorporates two new designs: Multi-dimensional Action Queries and Coarse-Fine Contrastive Denoising. Extensive experiments demonstrate MG-HAD's effectiveness for multi-granularity hand action detection, highlighting the significance of FHA-Kitchens for future research and real-world applications. The dataset and source code will be released.
Multi-Granularity Hand Action Detection
[ "Ting Zhe", "Jing Zhang", "Yongqian Li", "Yong Luo", "Han Hu", "Dacheng Tao" ]
Conference
poster
2306.10858
[ "https://github.com/superz678/mg-had" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LhcOWS7opA
@inproceedings{ lu2024handrefiner, title={HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting}, author={Wenquan Lu and Yufei Xu and Jing Zhang and Chaoyue Wang and Dacheng Tao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LhcOWS7opA} }
Diffusion models have achieved remarkable success in generating realistic images but suffer from generating accurate human hands, such as incorrect finger counts or irregular shapes. This difficulty arises from the complex task of learning the physical structure and pose of hands from training images, which involves extensive deformations and occlusions. For correct hand generation, our paper introduces a lightweight post-processing solution called $\textbf{HandRefiner}$. HandRefiner employs a conditional inpainting approach to rectify malformed hands while leaving other parts of the image untouched. We leverage the hand mesh reconstruction model that consistently adheres to the correct number of fingers and hand shape, while also being capable of fitting the desired hand pose in the generated image. Given a generated failed image due to malformed hands, we utilize ControlNet modules to re-inject such correct hand information. Additionally, we uncover a phase transition phenomenon within ControlNet as we vary the control strength. It enables us to take advantage of more readily available synthetic data without suffering from the domain gap between realistic and synthetic hands. Experiments demonstrate that HandRefiner can significantly improve the generation quality quantitatively and qualitatively. The code will be released.
HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting
[ "Wenquan Lu", "Yufei Xu", "Jing Zhang", "Chaoyue Wang", "Dacheng Tao" ]
Conference
poster
2311.17957
[ "https://github.com/wenquanlu/handrefiner" ]
https://huggingface.co/papers/2311.17957
1
0
0
5
[ "gxkok/control_sd15_depth_hand" ]
[]
[]
[ "gxkok/control_sd15_depth_hand" ]
[]
[]
1
null
https://openreview.net/forum?id=LgDMOie21k
@inproceedings{ zhu2024perceptualdistortion, title={Perceptual-Distortion Balanced Image Super-Resolution is a Multi-Objective Optimization Problem}, author={Qiwen Zhu and Yanjie Wang and Shilv Cai and Liqun Chen and Jiahuan Zhou and Luxin Yan and Sheng Zhong and Xu Zou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LgDMOie21k} }
In this paper, we introduce a novel approach to single-image super-resolution (SISR) that balances perceptual quality and distortion through multi-objective optimization (MOO). Traditional pixel-based distortion metrics like PSNR and SSIM often fail to align with human perceptual quality, resulting in blurry outputs despite high scores. To address this, we propose the Multi-Objective Bayesian Optimization Super-Resolution (MOBOSR) framework, which dynamically adjusts loss weights during training. This reduces the need for manual hyperparameter tuning and lessens computational demands compared to AutoML. Our method conceptualizes the relationship between loss weights and image quality assessment (IQA) metrics as black-box objective functions, optimized to achieve an optimal perception-distortion Pareto frontier. Extensive experiments demonstrate that MOBOSR surpasses current state-of-the-art methods in both perception and distortion, significantly advancing the perception-distortion Pareto frontier. Our work lays a foundation for future exploration of the balance between perceptual quality and fidelity in image restoration tasks. Source codes and pretrained models are available at: https://github.com/ZhuKeven/MOBOSR.
Perceptual-Distortion Balanced Image Super-Resolution is a Multi-Objective Optimization Problem
[ "Qiwen Zhu", "Yanjie Wang", "Shilv Cai", "Liqun Chen", "Jiahuan Zhou", "Luxin Yan", "Sheng Zhong", "Xu Zou" ]
Conference
oral
2409.03179
[ "https://github.com/ZhuKeven/MOBOSR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Le8wLGFIDe
@inproceedings{ guo2024rdplanes, title={R4D-planes: Remapping Planes For Novel View Synthesis and Self-Supervised Decoupling of Monocular Videos}, author={Junyuan Guo and Hao Tang and Teng Wang and Chao Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Le8wLGFIDe} }
View synthesis and the decoupling of dynamic objects from the static environment in monocular video are both long-standing challenges in CV and CG. Most previous NeRF-based methods rely on implicit representations, which require additional supervision and training time. More recently, various explicit representations have been applied to the task of novel view synthesis for dynamic scenes, such as multi-planes or 3D Gaussian splatting. They usually encode the dynamics by introducing an additional time dimension or a deformation field. These methods greatly reduce the time consumption, but still fail to achieve high rendering quality in some scenes, especially real scenes. For the latter decoupling problem, previous neural radiance field methods require frequent tuning of the relevant parameters for different scenes, which is very inconvenient for practical use. We consider the above problems and propose a new representation of dynamic scenes based on tensor decomposition, which we call R4D-planes. The key to our method is remapping, which compensates for the shortcomings of the plane structure by fusing space-time information and remapping to new indexes. Furthermore, we implement a new decoupling structure, which can efficiently decouple dynamic and static scenes in a self-supervised manner. Experimental results show our method achieves better rendering quality and training efficiency in both view synthesis and decoupling tasks for monocular scenes.
R4D-planes: Remapping Planes For Novel View Synthesis and Self-Supervised Decoupling of Monocular Videos
[ "Junyuan Guo", "Hao Tang", "Teng Wang", "Chao Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LagXbTuzYW
@inproceedings{ chen2024onechart, title={OneChart: Purify the Chart Structural Extraction via One Auxiliary Token}, author={Jinyue Chen and Lingyu Kong and Haoran Wei and Chenglong Liu and Zheng Ge and Liang Zhao and Jianjian Sun and Chunrui Han and Xiangyu Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LagXbTuzYW} }
Chart parsing poses a significant challenge due to the diversity of styles, values, texts, and so forth. Even advanced large vision-language models (LVLMs) with billions of parameters struggle to handle such tasks satisfactorily. To address this, we propose OneChart: a reliable agent specifically devised for the structural extraction of chart information. Similar to popular LVLMs, OneChart incorporates an autoregressive main body. Uniquely, to enhance the reliability of the numerical parts of the output, we introduce an auxiliary token placed at the beginning of the total tokens along with an additional decoder. The numerically optimized (auxiliary) token allows subsequent tokens for chart parsing to capture enhanced numerical features through causal attention. Furthermore, with the aid of the auxiliary token, we have devised a self-evaluation mechanism that enables the model to gauge the reliability of its chart parsing results by providing confidence scores for the generated content. Compared to current state-of-the-art (SOTA) chart parsing models, e.g., DePlot, ChartVLM, ChartAst, OneChart significantly outperforms in Average Precision (AP) for chart structural extraction across multiple public benchmarks, despite enjoying only 0.2 billion parameters. Moreover, as a chart parsing agent, it also brings 10%+ accuracy gains for the popular LVLM (LLaVA-1.6) in the downstream ChartQA benchmark.
OneChart: Purify the Chart Structural Extraction via One Auxiliary Token
[ "Jinyue Chen", "Lingyu Kong", "Haoran Wei", "Chenglong Liu", "Zheng Ge", "Liang Zhao", "Jianjian Sun", "Chunrui Han", "Xiangyu Zhang" ]
Conference
oral
2404.09987
[ "https://github.com/lingyvkong/onechart" ]
https://huggingface.co/papers/2404.09987
2
2
0
9
[ "kppkkp/OneChart" ]
[ "kppkkp/ChartSE" ]
[]
[ "kppkkp/OneChart" ]
[ "kppkkp/ChartSE" ]
[]
1
null
https://openreview.net/forum?id=LV8v20O1Vm
@inproceedings{ zhu2024weaksam, title={Weak{SAM}: Segment Anything Meets Weakly-supervised Instance-level Recognition}, author={Lianghui Zhu and Junwei Zhou and Yan Liu and Hao Xin and Wenyu Liu and Xinggang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LV8v20O1Vm} }
Weakly-supervised visual recognition using inexact supervision is a critical yet challenging learning problem. It significantly reduces human labeling costs and traditionally relies on multi-instance learning and pseudo-labeling. This paper introduces WeakSAM and solves the weakly-supervised object detection (WSOD) and segmentation by utilizing the pre-learned world knowledge contained in a vision foundation model, i.e., the Segment Anything Model (SAM). WeakSAM addresses two critical limitations in traditional WSOD retraining, i.e., pseudo ground truth (PGT) incompleteness and noisy PGT instances, through adaptive PGT generation and Region of Interest (RoI) drop regularization. It also addresses the SAM's shortcomings of requiring human prompts and category unawareness in object detection and segmentation. Our results indicate that WeakSAM significantly surpasses previous state-of-the-art methods in WSOD and WSIS benchmarks with large margins, i.e. average improvements of 7.4% and 8.5%, respectively.
WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition
[ "Lianghui Zhu", "Junwei Zhou", "Yan Liu", "Hao Xin", "Wenyu Liu", "Xinggang Wang" ]
Conference
poster
2402.14812
[ "https://github.com/hustvl/weaksam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LTTplX1AJm
@inproceedings{ liu2024listenformer, title={ListenFormer: Responsive Listening Head Generation with Non-autoregressive Transformers}, author={Miao Liu and Jing Wang and Xinyuan Qian and Haizhou Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LTTplX1AJm} }
As one of the crucial elements in human-robot interaction, responsive listening head generation has attracted considerable attention from researchers. It aims to generate a listening head video based on the speaker's audio and video as well as a reference listener image. However, existing methods exhibit two limitations: 1) the generation capability of their models is limited, resulting in generated videos that are far from real ones, and 2) they mostly employ autoregressive generative models, unable to mitigate the risk of error accumulation. To tackle these issues, we propose Listenformer, which leverages the powerful temporal modeling capability of transformers for generation. It can perform non-autoregressive prediction with the proposed two-stage training method, simultaneously achieving temporal continuity and overall consistency in the outputs. To fully utilize the information from the speaker inputs, we design an audio-motion attention fusion module, which improves the correlation of audio and motion features for accurate responses. Additionally, a novel decoding method called sliding window with a large shift is proposed for Listenformer, demonstrating both excellent computational efficiency and effectiveness. Extensive experiments show that Listenformer outperforms the existing state-of-the-art methods on the ViCo and L2L datasets. A perceptual user study further demonstrates the comprehensive performance of our method in terms of generation diversity, identity preservation, speaker-listener synchronization, and attitude matching.
ListenFormer: Responsive Listening Head Generation with Non-autoregressive Transformers
[ "Miao Liu", "Jing Wang", "Xinyuan Qian", "Haizhou Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LRdE67ATPN
@inproceedings{ sun2024fdgs, title={F-3{DGS}: Factorized Coordinates and Representations for 3D Gaussian Splatting}, author={Xiangyu Sun and Joo Chan Lee and Daniel Rho and Jong Hwan Ko and Usman Ali and Eunbyung Park}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LRdE67ATPN} }
The neural radiance field (NeRF) has made significant strides in representing 3D scenes and synthesizing novel views. Despite its advancements, the high computational costs of NeRF have posed challenges for its deployment in resource-constrained environments and real-time applications. As an alternative to NeRF-like neural rendering methods, 3D Gaussian Splatting (3DGS) offers rapid rendering speeds while maintaining excellent image quality. However, as it represents objects and scenes using a myriad of Gaussians, it requires substantial storage to achieve high-quality representation. To mitigate the storage overhead, we propose Factorized 3D Gaussian Splatting (F-3DGS), a novel approach that drastically reduces storage requirements while preserving image quality. Inspired by classical matrix and tensor factorization techniques, our method represents and approximates dense clusters of Gaussians with significantly fewer Gaussians through efficient factorization. We aim to efficiently represent dense 3D Gaussians by approximating them with a limited amount of information for each axis and their combinations. This method allows us to encode a substantially large number of Gaussians along with their essential attributes---such as color, scale, and rotation---necessary for rendering using a relatively small number of elements. Extensive experimental results demonstrate that F-3DGS achieves a significant reduction in storage costs while maintaining comparable quality in rendered images.
F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting
[ "Xiangyu Sun", "Joo Chan Lee", "Daniel Rho", "Jong Hwan Ko", "Usman Ali", "Eunbyung Park" ]
Conference
poster
2405.17083
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LOJdqiqbAt
@inproceedings{ chen2024progressive, title={Progressive Point Cloud Denoising with Cross-Stage Cross-Coder Adaptive Edge Graph Convolution Network}, author={Wu Chen and Hehe Fan and Qiuping Jiang and Chao Huang and Yi Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LOJdqiqbAt} }
Due to the limitations of collection devices and unstable scanning processes, point cloud data is usually noisy. Such noise deforms the underlying structures of point clouds and inevitably affects downstream tasks such as rendering, reconstruction, and analysis. In this paper, we propose a Cross-stage Cross-coder Adaptive Edge Graph Convolution Network (C$^{2}$AENet) to denoise point clouds. Our network uses multiple stages to progressively and iteratively denoise points. To improve the effectiveness, we add connections between two stages and between the encoder and decoder, leading to the cross-stage cross-coder architecture. Additionally, existing graph-based point cloud learning methods tend to capture local structure. They typically construct a semantic graph based on semantic distance, which may ignore Euclidean neighbors and lead to insufficient geometry perception. Therefore, we introduce a geometric graph and adaptively calculate edge attention based on the local and global structural information of the points. This results in a novel graph convolution module that allows the network to capture richer contextual information and focus on more important parts. Extensive experiments demonstrate that the proposed method is competitive compared with other state-of-the-art methods. The code will be made publicly available.
Progressive Point Cloud Denoising with Cross-Stage Cross-Coder Adaptive Edge Graph Convolution Network
[ "Wu Chen", "Hehe Fan", "Qiuping Jiang", "Chao Huang", "Yi Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LBMJBUiegp
@inproceedings{ hu2024prompting, title={Prompting to Adapt Foundational Segmentation Models}, author={Jie Hu and Jie Li and Yue Ma and Liujuan Cao and Songan Zhang and Wei Zhang and GUANNAN JIANG and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=LBMJBUiegp} }
Foundational segmentation models, predominantly trained on scenes typical of natural environments, struggle to generalize across varied image domains. Traditional "training-to-adapt" methods rely heavily on extensive data retraining and model architecture modifications. This significantly limits the models' generalization capabilities and efficiency in deployment. In this study, we propose a novel adaptation paradigm, termed "prompting-to-adapt", to tackle the above issue by introducing an innovative image prompter. This prompter generates domain-specific prompts through few-shot image-mask pairs, incorporating diverse image processing techniques to enhance adaptability. To tackle the inherent non-differentiability of image prompts, we further devise an information-estimation-based gradient descent strategy that leverages the information entropy of image processing combinations to optimize the prompter, ensuring effective adaptation. Through extensive experiments across nine datasets spanning seven image domains (\emph{i.e.}, depth, thermal, camouflage, endoscopic, ultrasound, grayscale, and natural) and four scenarios (\emph{i.e.}, common scenes, camouflage objects, medical images, and industrial data), we demonstrate that our approach significantly improves the foundational models' adaptation capabilities. Moreover, the interpretability of the generated prompts provides insightful revelations into their image processing mechanisms. Our source code will be publicly available to foster further innovation and exploration in this field.
Prompting to Adapt Foundational Segmentation Models
[ "Jie Hu", "Jie Li", "Yue Ma", "Liujuan Cao", "Songan Zhang", "Wei Zhang", "GUANNAN JIANG", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=L99kOQk12i
@inproceedings{ wu2024mmhead, title={{MMH}ead: Towards Fine-grained Multi-modal 3D Facial Animation}, author={Sijing Wu and Yunhao Li and Yichao Yan and Huiyu Duan and Ziwei Liu and Guangtao Zhai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=L99kOQk12i} }
3D facial animation has attracted considerable attention due to its extensive applications in the multimedia field. Audio-driven 3D facial animation has been widely explored with promising results. However, multi-modal 3D facial animation, especially text-guided 3D facial animation is rarely explored due to the lack of multi-modal 3D facial animation dataset. To fill this gap, we first construct a large-scale multi-modal 3D facial animation dataset, MMHead, which consists of 49 hours of 3D facial motion sequences, speech audios, and rich hierarchical text annotations. Each text annotation contains abstract action and emotion descriptions, fine-grained facial and head movements (i.e., expression and head pose) descriptions, and three possible scenarios that may cause such emotion. Concretely, we integrate five public 2D portrait video datasets, and propose an automatic pipeline to 1) reconstruct 3D facial motion sequences from monocular videos; and 2) obtain hierarchical text annotations with the help of AU detection and ChatGPT. Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation. Moreover, a simple but efficient VQ-VAE-based method named MM2Face is proposed to unify the multi-modal information and generate diverse and plausible 3D facial motions, which achieves competitive results on both benchmarks. Extensive experiments and comprehensive analysis demonstrate the significant potential of our dataset and benchmarks in promoting the development of multi-modal 3D facial animation.
MMHead: Towards Fine-grained Multi-modal 3D Facial Animation
[ "Sijing Wu", "Yunhao Li", "Yichao Yan", "Huiyu Duan", "Ziwei Liu", "Guangtao Zhai" ]
Conference
poster
2410.07757
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=L839OWoifx
@inproceedings{ shi2024selfderived, title={Self-derived Knowledge Graph Contrastive Learning for Recommendation}, author={Lei Shi and Jiapeng Yang and Pengtao Lv and Lu Yuan and Feifei Kou and Jia Luo and Mingying Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=L839OWoifx} }
Knowledge Graphs (KGs) serve as valuable auxiliary information to improve the accuracy of recommendation systems. Previous methods have leveraged the knowledge graph to enhance item representation and thus achieve excellent performance. However, these approaches heavily rely on high-quality knowledge graphs and learn enhanced representations with the assistance of carefully designed triplets. Furthermore, the emergence of knowledge graphs has led to models that ignore the inherent relationships between items and entities. To address these challenges, we propose a Self-Derived Knowledge Graph Contrastive Learning framework (CL-SDKG) to enhance recommendation systems. Specifically, we employ the variational graph reconstruction technique to estimate the Gaussian distribution of user-item nodes corresponding to the graph neural network aggregation layer. This process generates multiple KGs, referred to as self-derived KGs. The self-derived KG acquires more robust perceptual representations through the consistency of the estimated structure. Besides, the self-derived KG allows models to focus on user-item interactions and reduce the negative impact of miscellaneous dependencies introduced by conventional KGs. Finally, we apply contrastive learning to the self-derived KG to further improve the robustness of CL-SDKG through the traditional KG contrast-enhanced process. We conducted comprehensive experiments on three public datasets, and the results demonstrate that our CL-SDKG outperforms state-of-the-art baselines.
Self-derived Knowledge Graph Contrastive Learning for Recommendation
[ "Lei Shi", "Jiapeng Yang", "Pengtao Lv", "Lu Yuan", "Feifei Kou", "Jia Luo", "Mingying Xu" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=L3yugOksj6
@inproceedings{ li2024thinking, title={Thinking Temporal Automatic White Balance: Datasets, Models and Benchmarks}, author={chunxiao Li and Shuyang Wang and Xuejing Kang and Anlong Ming}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=L3yugOksj6} }
Temporal Automatic White Balance (TAWB) corrects the color cast within each frame, while ensuring consistent illumination across consecutive frames. Unlike conventional AWB, TAWB has received limited research attention for an extended period. However, the growing popularity of short-form videos has increased focus on video color experiences. To further advance research on TAWB, we aim to address the bottlenecks associated with datasets, models, and benchmarks. 1) Dataset challenge: Currently, only one TAWB dataset (BCC), captured with a single camera, is available. It lacks temporal continuity due to challenges in capturing realistic illuminations and dynamic raw data. In response, we meticulously designed an acquisition strategy based on the actual distribution pattern of illuminations and created a comprehensive TAWB dataset named CTA comprising 6 cameras that offer 12K continuous illuminations. Furthermore, we employed video frame interpolation techniques, extending the captured static raw data into dynamic form and ensuring continuous illumination. 2) Model challenge: Both of the two prevailing TAWB methods rely on LSTM. However, the fixed gating mechanism of LSTM often fails to adapt to varying content or illuminations, resulting in unstable illumination estimation. In response, we propose CTANet, which integrates cross-frame attention and RepViT for self-adjustment to content and illumination variations. Additionally, the mobile-friendly design of RepViT enhances the portability of CTANet. 3) Benchmark challenge: To date, there is no benchmark of TAWB methods across illumination and camera types. To address this, we establish a benchmark by conducting a comparative analysis of 8 cutting-edge AWB and TAWB methods together with CTANet across 3 typical illumination scenes and 7 cameras from two representative datasets. Our dataset and code are available in the supplementary material.
Thinking Temporal Automatic White Balance: Datasets, Models and Benchmarks
[ "chunxiao Li", "Shuyang Wang", "Xuejing Kang", "Anlong Ming" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=L18Ihhu87n
@inproceedings{ luo2024cefdet, title={Cefdet: Cognitive Effectiveness Network Based on Fuzzy Inference for Action Detection}, author={Zhe Luo and Weina Fu and Shuai Liu and Saeed Anwar and Muhammad Saqib and Sambit Bakshi and Khan Muhammad}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=L18Ihhu87n} }
Action detection and understanding provide the foundation for the generation and interaction of multimedia content. However, existing methods mainly focus on constructing complex relational inference networks, overlooking the judgment of detection effectiveness. Moreover, these methods frequently generate detection results with cognitive abnormalities. To solve the above problems, this study proposes a cognitive effectiveness network based on fuzzy inference (Cefdet), which introduces the concept of “cognition-based detection” to simulate human cognition. First, a fuzzy-driven cognitive effectiveness evaluation module (FCM) is established to introduce fuzzy inference into action detection. FCM is combined with human action features to simulate the cognition-based detection process, which clearly locates the position of frames with cognitive abnormalities. Then, a fuzzy cognitive update strategy (FCS) is proposed based on the FCM, which utilizes fuzzy logic to re-detect the cognition-based detection results and effectively update the results with cognitive abnormalities. Experimental results demonstrate that Cefdet exhibits superior performance against several mainstream algorithms on public datasets, validating its effectiveness and superiority.
Cefdet: Cognitive Effectiveness Network Based on Fuzzy Inference for Action Detection
[ "Zhe Luo", "Weina Fu", "Shuai Liu", "Saeed Anwar", "Muhammad Saqib", "Sambit Bakshi", "Khan Muhammad" ]
Conference
poster
2410.05771
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KuHaYRftVN
@inproceedings{ huang2024advancing, title={Advancing 3D Object Grounding Beyond a Single 3D Scene}, author={Wencan Huang and Daizong Liu and Wei Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KuHaYRftVN} }
As a widely explored multi-modal task, 3D object grounding endeavors to localize a unique pre-existing object within a single 3D scene given a natural language description. However, such a strict setting is unnatural as it is not always possible to know whether a target object actually exists in a specific 3D scene. In real-world scenarios, a collection of 3D scenes is generally available, some of which may not contain the described object while some potentially contain multiple target objects. To this end, we introduce a more realistic setting, named Group-wise 3D Object Grounding, to simultaneously process a group of related 3D scenes, allowing a flexible number of target objects to exist in each scene. Instead of localizing target objects in each scene individually, we argue that ignoring the rich visual information contained in other related 3D scenes within the same group may lead to sub-optimal results. To achieve more accurate localization, we propose a baseline method named GNL3D, a Grouped Neural Listener for 3D grounding in the group-wise setting, which extends the traditional 3D object grounding pipeline with a novel language-guided consensus aggregation and distribution mechanism to explicitly exploit the intra-group visual connections. Specifically, based on context-aware spatial-semantic alignment, a Language-guided Consensus Aggregation Module (LCAM) is developed to aggregate the visual features of target objects in each 3D scene to form a visual consensus representation, which is then distributed and injected into a consensus-modulated feature refinement module for refining visual features, thus benefiting the subsequent multi-modal reasoning. Furthermore, we design a curriculum strategy to promote the LCAM to learn step by step how to extract effective visual consensus with the existence of negative 3D scenes where no target object exists. To validate the effectiveness of the proposed method, we reorganize and enhance the ReferIt3D dataset and propose evaluation metrics to benchmark prior work and GNL3D. Extensive experiments demonstrate GNL3D achieves state-of-the-art results on both the group-wise setting and the traditional 3D object grounding task.
Advancing 3D Object Grounding Beyond a Single 3D Scene
[ "Wencan Huang", "Daizong Liu", "Wei Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KraCCKtaAL
@inproceedings{ wang2024multihateclip, title={MultiHateClip: A Multilingual Benchmark Dataset for Hateful Video Detection on YouTube and Bilibili}, author={Han Wang and Rui Yang Tan and Usman Naseem and Roy Ka-Wei Lee}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KraCCKtaAL} }
Hate speech is a pressing issue in modern society, with significant repercussions both online and offline. Recent research in hate speech detection has primarily centered on text-based media, largely overlooking multimodal content such as videos. Existing studies on hateful video datasets have predominantly focused on English content within a Western context and have been limited to binary labels (hateful or non-hateful), lacking detailed contextual information. This study presents $\textsf{MultiHateClip}$, a novel multilingual dataset curated through hate lexicons and human annotation. It aims to enhance the detection of hateful videos on platforms such as YouTube and Bilibili, encompassing content in both English and Chinese. Comprising 2,000 videos annotated for hatefulness, offensiveness, and normalcy, this dataset provides a cross-cultural perspective on gender-based hate speech. Through a detailed examination of human annotation results, we discuss the differences between Chinese and English hateful videos and underscore the importance of different modalities in hateful and offensive video analysis. Evaluations of state-of-the-art video classification models, such as $\textit{VLM}$ and $\textit{GPT-4V}$, on $\textsf{MultiHateClip}$ highlight the existing challenges in accurately distinguishing between hateful and offensive content and the urgent need for models that are both multimodally and culturally nuanced. $\textsf{MultiHateClip}$ serves as a foundational step towards developing more effective hateful video detection solutions, emphasizing the importance of a multimodal and culturally sensitive approach in the ongoing fight against online hate speech.
MultiHateClip: A Multilingual Benchmark Dataset for Hateful Video Detection on YouTube and Bilibili
[ "Han Wang", "Rui Yang Tan", "Usman Naseem", "Roy Ka-Wei Lee" ]
Conference
oral
2408.03468
[ "https://github.com/social-ai-studio/multihateclip" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KcDJHQurtN
@inproceedings{ naseem2024vaccine, title={Vaccine Misinformation Detection in X using Cooperative Multimodal Framework}, author={Usman Naseem and Adam Dunn and Matloob Khushi and Jinman Kim}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KcDJHQurtN} }
Identifying social media posts that spread vaccine misinformation can inform emerging public health risks and aid in designing effective communication interventions. Existing studies, while promising, often rely on single user posts, potentially leading to flawed conclusions. This highlights the necessity to model users' historical posts for a comprehensive understanding of their stance towards vaccines. However, users' historical posts may contain a diverse range of content that adds noise and leads to low performance. To address this gap, in this study, we present VaxMine, a cooperative multi-agent reinforcement learning method that automatically selects relevant textual and visual content from a user's posts, reducing noise. To evaluate the performance of the proposed method, we create and release a new dataset of 2,072 users with historical posts due to the unavailability of publicly available datasets. The experimental results show that our approach outperforms state-of-the-art methods with an F1-Score of 0.94 (an absolute increase of 13\%), demonstrating that extracting relevant content from users' historical posts and understanding both modalities are essential to identifying anti-vaccine users on social media. We further analyze the robustness and generalizability of VaxMine, showing that extracting relevant textual and visual content from a user's posts improves performance. We conclude with a discussion on the practical implications of our study by explaining how computational methods used in surveillance can benefit from our work, with flow-on effects on the design of health communication interventions to counter vaccine misinformation on social media. We suggest that releasing a robustly annotated dataset will support further advances and benchmarking of methods.
Vaccine Misinformation Detection in X using Cooperative Multimodal Framework
[ "Usman Naseem", "Adam Dunn", "Matloob Khushi", "Jinman Kim" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KX5Gf2dpZ4
@inproceedings{ sun2024ifgarments, title={{IF}-Garments: Reconstructing Your Intersection-Free Multi-Layered Garments from Monocular Videos}, author={Mingyang Sun and Qipeng Yan and Zhuoer Liang and Dongliang Kou and Dingkang Yang and Ruisheng Yuan and Xiao Zhao and Mingcheng Li and Lihua Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KX5Gf2dpZ4} }
Reconstructing garments from monocular videos has attracted considerable attention as it provides a convenient and low-cost solution for clothing digitization. In reality, people wear clothing with countless variations and multiple layers. Existing studies attempt to extract garments from a single video. They either behave poorly in generalization due to reliance on limited clothing templates or struggle to handle the intersections of multi-layered clothing leading to the lack of physical plausibility. Besides, there are inevitable and undetectable overlaps for a single video that hinder researchers from modeling complete and intersection-free multi-layered clothing. To address the above limitations, in this paper, we propose a novel method to reconstruct multi-layered clothing from multiple monocular videos sequentially, which surpasses existing work in generalization and robustness against penetration. For each video, neural fields are employed to implicitly represent the clothed body, from which the meshes with frame-consistent structures are explicitly extracted. Next, we implement a template-free method for extracting a single garment by back-projecting the image segmentation labels of different frames onto these meshes. In this way, multiple garments can be obtained from these monocular videos and then aligned to form the whole outfit. However, intersection always occurs due to overlapping deformation in the real world and perceptual errors for monocular videos. To this end, we innovatively introduce a physics-aware module that combines neural fields with a position-based simulation framework to fine-tune the penetrating vertices of garments, ensuring robustly intersection-free. Additionally, we collect a mini dataset with fashionable garments to evaluate the quality of clothing reconstruction comprehensively. The code and data will be open-sourced if this work is accepted.
IF-Garments: Reconstructing Your Intersection-Free Multi-Layered Garments from Monocular Videos
[ "Mingyang Sun", "Qipeng Yan", "Zhuoer Liang", "Dongliang Kou", "Dingkang Yang", "Ruisheng Yuan", "Xiao Zhao", "Mingcheng Li", "Lihua Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KVWMXPpCMV
@inproceedings{ dong2024adaptive, title={Adaptive Query Selection for Camouflaged Instance Segmentation}, author={Bo Dong and Pichao WANG and Hao Luo and Fan Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KVWMXPpCMV} }
Camouflaged instance segmentation is a challenging task due to the various aspects such as color, structure, lighting, etc., of object instances embedded in complex backgrounds. Although the current DETR-based scheme simplifies the pipeline, it suffers from a large number of object queries, leading to many false positive instances. To address this issue, we propose an adaptive query selection mechanism. Our research reveals that a large number of redundant queries scatter the extracted features of the camouflaged instances. To remove these redundant queries with weak correlation, we evaluate the importance of the object query from the perspectives of information entropy and volatility. Moreover, we observed that occlusion and overlapping instances significantly impact the accuracy of the selection mechanism. Therefore, we design a boundary location embedding mechanism that incorporates fake instance boundaries to obtain better location information for more accurate query instance matching. We conducted extensive experiments on two challenging camouflaged instance segmentation datasets, namely COD10K and NC4K, and demonstrated the effectiveness of our proposed model. Compared with the OSFormer, our model significantly improves the performance by 3.8\% AP and 5.6\% AP with less computational cost, achieving the state-of-the-art of 44.8 AP and 48.1 AP with ResNet-50 on the COD10K and NC4K test-dev sets, respectively.
Adaptive Query Selection for Camouflaged Instance Segmentation
[ "Bo Dong", "Pichao WANG", "Hao Luo", "Fan Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KQVjmulG2I
@inproceedings{ zhu2024perfrdiff, title={Per{FRD}iff: Personalised Weight Editing for Multiple Appropriate Facial Reaction Generation}, author={Hengde Zhu and Xiangyu Kong and Weicheng Xie and Xin Huang and Linlin Shen and Lu Liu and Hatice Gunes and Siyang Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KQVjmulG2I} }
Human facial reactions play crucial roles in dyadic human-human interactions, where individuals (i.e., listeners) with varying cognitive process styles may display different but appropriate facial reactions in response to an identical behaviour expressed by their conversational partners. While several existing facial reaction generation approaches are capable of generating multiple appropriate facial reactions (AFRs) in response to each given human behaviour, they fail to take humans' personalised cognitive processes into account in AFR generation. In this paper, we propose the first online personalised multiple appropriate facial reaction generation (MAFRG) approach which learns a unique personalised cognitive style from the target human listener's previous facial behaviours and represents it as a set of network weight shifts. These personalised weight shifts are then applied to edit the weights of a pre-trained generic MAFRG model, allowing the obtained personalised model to naturally mimic the target human listener's cognitive process in its reasoning for multiple AFR generations. Experimental results show that our approach not only largely outperforms all existing approaches in generating more appropriate and diverse generic AFRs, but also serves as the first reliable personalised MAFRG solution. Our code is provided in the Supplementary Material.
PerFRDiff: Personalised Weight Editing for Multiple Appropriate Facial Reaction Generation
[ "Hengde Zhu", "Xiangyu Kong", "Weicheng Xie", "Xin Huang", "Linlin Shen", "Lu Liu", "Hatice Gunes", "Siyang Song" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KOIoLNLTAW
@inproceedings{ chen2024groupaware, title={Group-aware Parameter-efficient Updating for Content-Adaptive Neural Video Compression}, author={Zhenghao Chen and Luping Zhou and Zhihao Hu and Dong Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KOIoLNLTAW} }
Content-adaptive compression is crucial for enhancing the adaptability of the pre-trained neural codec for various contents. Although these methods have been very practical in neural image compression (NIC), their application in neural video compression (NVC) is still limited due to two main aspects: 1), video compression relies heavily on temporal redundancy, therefore updating just one or a few frames can lead to significant errors accumulating over time; 2), NVC frameworks are generally more complex, with many large components that are not easy to update quickly during encoding. To address the previously mentioned challenges, we have developed a content-adaptive NVC technique called Group-aware Parameter-Efficient Updating (GPU). Initially, to minimize error accumulation, we adopt a group-aware approach for updating encoder parameters. This involves adopting a patch-based Group of Pictures (GoP) training strategy to segment a video into patch-based GoPs, which will be updated to facilitate a globally optimized domain-transferable solution. Subsequently, we introduce a parameter-efficient delta-tuning strategy, which is achieved by integrating several light-weight adapters into each coding component of the encoding process by both serial and parallel configuration. Such architecture-agnostic modules stimulate the components with large parameters, thereby reducing both the update cost and the encoding time. We incorporate our GPU into the latest NVC framework and conduct comprehensive experiments, whose results showcase outstanding video compression efficiency across four video benchmarks and adaptability of one medical image benchmark.
Group-aware Parameter-efficient Updating for Content-Adaptive Neural Video Compression
[ "Zhenghao Chen", "Luping Zhou", "Zhihao Hu", "Dong Xu" ]
Conference
poster
2405.04274
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KKLB4ZRboY
@inproceedings{ ma2024safesd, title={Safe-{SD}: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking}, author={Zhiyuan Ma and Guoli Jia and Biqing Qi and Bowen Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KKLB4ZRboY} }
Recently, stable diffusion (SD) models have flourished in the field of image synthesis and personalized editing, with a range of photorealistic and unprecedented images being successfully generated. As a result, widespread interest has been ignited to develop and use various SD-based tools for visual content creation. However, the exposure of AI-created content on public platforms could raise both legal and ethical risks. In this regard, the traditional methods of adding watermarks to the already generated images (i.e., post-processing) may face a dilemma (e.g., being erased or modified) in terms of copyright protection and content monitoring, since powerful image inversion and text-to-image editing techniques have been widely explored in SD-based methods. In this work, we propose a $\textbf{Safe}$ and high-traceable $\textbf{S}$table $\textbf{D}$iffusion framework (namely $\textbf{Safe-SD}$) to adaptively implant graphical watermarks (e.g., QR code) into the imperceptible structure-related pixels during the generative diffusion process for supporting text-driven invisible watermarking and detection. Unlike previous high-cost injection-then-detection training frameworks, we design a simple and unified architecture, which makes it possible to simultaneously train watermark injection and detection in a single network, greatly improving the efficiency and convenience of use. Moreover, to further support text-driven generative watermarking and deeply explore its robustness and high-traceability, we elaborately design a $\lambda$-sampling and $\lambda$-encryption algorithm to fine-tune a latent diffuser wrapped by a VAE for balancing high-fidelity image synthesis and high-traceable watermark detection. We present our quantitative and qualitative results on the representative datasets LSUN, COCO, and FFHQ, demonstrating the state-of-the-art performance of Safe-SD and showing that it significantly outperforms previous approaches.
Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking
[ "Zhiyuan Ma", "Guoli Jia", "Biqing Qi", "Bowen Zhou" ]
Conference
poster
2407.13188
[ "" ]
https://huggingface.co/papers/2407.13188
0
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=KGEMdGRSUx
@inproceedings{ huang2024neighbor, title={Neighbor Does Matter: Global Positive-Negative Sampling for Vision-Language Pre-training}, author={Bin Huang and Feng He and Qi Wang and Hong Chen and Guohao Li and Zhifan Feng and Xin Wang and Wenwu Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KGEMdGRSUx} }
Sampling strategies have been widely adopted in Vision-Language Pre-training (VLP) and have achieved great success recently. However, the sampling strategies adopted by current VLP works are limited in two ways: i) they only focus on negative sampling, ignoring the importance of more informative positive samples; ii) their sampling strategies are conducted at the local in-batch level, which may lead to sub-optimal results. To tackle these problems, in this paper, we propose a Global Positive-Negative Sampling (GPN-S) framework for vision-language pre-training, which conducts both positive and negative sampling at the global level, grounded on the notion of neighborhood relationships. Specifically, our proposed GPN-S framework is capable of utilizing positive sampling to bring semantically equivalent samples closer, as well as employing negative sampling to push challenging negative samples farther away. We jointly consider them for vision-language pre-training from a global-level perspective rather than within a local mini-batch, which provides more informative and diverse samples. We evaluate the effectiveness of the proposed GPN-S framework by conducting experiments on several common downstream tasks, and the results demonstrate significant performance improvements over existing models.
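For intuition only, the sketch below shows one way global neighborhood-based positive and negative sampling could be realized with a precomputed cross-batch feature bank; the function name, the cosine-similarity criterion, and the k values are illustrative assumptions rather than the paper's actual procedure.

```python
# Hedged sketch: draw positives and hard negatives from a global feature bank
# (instead of the current mini-batch), using nearest neighbors as positives and
# the next most-similar entries as challenging negatives.
import torch
import torch.nn.functional as F

def global_pos_neg_sampling(query: torch.Tensor, bank: torch.Tensor,
                            k_pos: int = 1, k_neg: int = 8):
    """query: (B, D) batch embeddings; bank: (N, D) global feature bank."""
    query = F.normalize(query, dim=-1)            # cosine geometry
    bank = F.normalize(bank, dim=-1)
    sim = query @ bank.t()                        # (B, N) global similarities
    idx = sim.topk(k_pos + k_neg, dim=1).indices  # most similar bank entries per query
    positives = bank[idx[:, :k_pos]]              # (B, k_pos, D): semantically equivalent samples
    hard_negatives = bank[idx[:, k_pos:]]         # (B, k_neg, D): challenging negatives
    return positives, hard_negatives
```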
Neighbor Does Matter: Global Positive-Negative Sampling for Vision-Language Pre-training
[ "Bin Huang", "Feng He", "Qi Wang", "Hong Chen", "Guohao Li", "Zhifan Feng", "Xin Wang", "Wenwu Zhu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KFbRc2bvaJ
@inproceedings{ qu2024training, title={Training pansharpening networks at full resolution using degenerate invariance}, author={YiChang Qu and Bing Li and Jie Huang and Feng Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KFbRc2bvaJ} }
Pan-sharpening is an important technique for remote sensing imaging systems to obtain high resolution multispectral images. Existing deep learning-based methods mostly rely on using pseudo-groundtruth multi-spectral images for supervised learning. The whole training process only remains at the scale of reduced resolution, which means that the impact of the degradation process is ignored and high-quality images cannot be guaranteed at full resolution. To address the challenge, we propose a new unsupervised framework that does not rely on pseudo-groundtruth but uses the invariance of the degradation process to build a consistent loss function on the original scale for network training. Specifically, first, we introduce the operator learning method to build an exact mapping function from multi-spectral to panchromatic images and decouple spectral features and texture features. Then, through joint training, operators and convolutional networks can learn the spatial degradation process and spectral degradation process at full resolution, respectively. By introducing them to build consistency constraints, we can train the pansharpening network at the original full resolution. Our approach could be applied to existing pansharpening methods, improving their usability on original data, which is matched to practical application requirements. The experimental results on different kinds of satellite datasets demonstrate that the new network outperforms state-of-the-art methods both visually and quantitatively.
Training pansharpening networks at full resolution using degenerate invariance
[ "YiChang Qu", "Bing Li", "Jie Huang", "Feng Zhao" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KBeJ7gxhCc
@inproceedings{ jin2024objectlevel, title={Object-Level Pseudo-3D Lifting for Distance-Aware Tracking}, author={Haoyuan Jin and Xuesong Nie and Yunfeng Yan and Xi Chen and Zhihang Zhu and Donglian Qi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=KBeJ7gxhCc} }
Multi-object tracking (MOT) is a pivotal task for media interpretation, where reliable motion and appearance cues are essential for cross-frame identity preservation. However, limited by the inherent perspective properties of 2D space, the crowd density and frequent occlusions in real-world scenes expose the fragility of these cues. We observe the natural advantage of objects being well-separated in high-dimensional space and propose a novel 2D MOT framework, ``Detecting-Lifting-Tracking'' (DLT). Initially, a pre-trained detector is employed to capture 2D object information. Secondly, we introduce a Mamba Distance Estimator to obtain the distances of objects to a monocular camera with temporal consistency, achieving object-level pseudo-3D lifting. Finally, we thoroughly explore distance-aware tracking via pseudo-3D information. Specifically, we introduce a Score-Distance Hierarchical Matching and Short-Long Terms Association to enhance accurate and robust association capability. Even without appearance cues, our DLT achieves state-of-the-art performance on MOT17, MOT20, and DanceTrack, demonstrating its potential to address occlusion challenges.
Object-Level Pseudo-3D Lifting for Distance-Aware Tracking
[ "Haoyuan Jin", "Xuesong Nie", "Yunfeng Yan", "Xi Chen", "Zhihang Zhu", "Donglian Qi" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=K76Fq3oGDf
@inproceedings{ chen2024transferring, title={Transferring to Real-World Layouts: A Depth-aware Framework for Scene Adaptation}, author={Mu Chen and Zhedong Zheng and Yi Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=K76Fq3oGDf} }
Scene segmentation via unsupervised domain adaptation (UDA) enables the transfer of knowledge acquired from source synthetic data to real-world target data, which largely reduces the need for manual pixel-level annotations in the target domain. To facilitate domain-invariant feature learning, existing methods typically mix data from both the source domain and target domain by simply copying and pasting the pixels. Such vanilla methods are usually sub-optimal since they do not take into account how well the mixed layouts correspond to real-world scenarios. Real-world scenarios have an inherent layout. We observe that semantic categories, such as sidewalks, buildings, and sky, display relatively consistent depth distributions and can be clearly distinguished in a depth map. Based on this observation, we propose a depth-aware framework to explicitly leverage depth estimation to mix the categories and facilitate the two complementary tasks, i.e., segmentation and depth learning, in an end-to-end manner. In particular, the framework contains a Depth-guided Contextual Filter (DCF) for data augmentation and a cross-task encoder for contextual learning. DCF simulates the real-world layouts, while the cross-task encoder further adaptively fuses the complementary features between the two tasks. Besides, it is worth noting that several public datasets do not provide depth annotation. Therefore, we leverage an off-the-shelf depth estimation network to generate pseudo depth. Extensive experiments show that our proposed method, even with pseudo depth, achieves competitive performance on two widely-used benchmarks, i.e., 77.7 mIoU on GTA→Cityscapes and 69.3 mIoU on Synthia→Cityscapes.
Transferring to Real-World Layouts: A Depth-aware Framework for Scene Adaptation
[ "Mu Chen", "Zhedong Zheng", "Yi Yang" ]
Conference
oral
2311.12682
[ "https://github.com/chen742/DCF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=K5Lrj6iTSO
@inproceedings{ sun2024caterpillar, title={Caterpillar: A Pure-{MLP} Architecture with Shifted-Pillars-Concatenation}, author={Jin Sun and Xiaoshuang Shi and Zhiyuan Wang and Kaidi Xu and Heng Tao Shen and Xiaofeng Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=K5Lrj6iTSO} }
Modeling in Computer Vision has evolved to MLPs. Vision MLPs naturally lack local modeling capability, for which the simplest remedy is to combine them with convolutional layers. Convolution, famous for its sliding-window scheme, however suffers from the redundancy and limited parallelism of that scheme. In this paper, we seek to dispense with the windowing scheme and introduce a more elaborate and parallelizable method to exploit locality. To this end, we propose a new MLP module, namely Shifted-Pillars-Concatenation (SPC), that consists of two steps: (1) Pillars-Shift, which generates four neighboring maps by shifting the input image along four directions, and (2) Pillars-Concatenation, which applies linear transformations and concatenation on the maps to aggregate local features. The SPC module offers superior local modeling power and performance gains, making it a promising alternative to the convolutional layer. Then, we build a pure-MLP architecture called Caterpillar by replacing the convolutional layer with the SPC module in a hybrid model of sMLPNet. Extensive experiments show Caterpillar's excellent performance on both small-scale and ImageNet-1k classification benchmarks, as well as remarkable scalability and transfer capability. The code is available at https://github.com/sunjin19126/Caterpillar.
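As a rough illustration of the mechanism described above (not the authors' released implementation), here is a minimal PyTorch sketch of an SPC-style block: four single-pixel shifts with zero padding, followed by concatenation and a pointwise linear fusion. The shift distance, padding choice, and layer names are assumptions.

```python
# Minimal SPC-style block: Pillars-Shift (four 1-pixel shifts with zero padding)
# followed by Pillars-Concatenation (concatenate and mix with a pointwise linear layer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedPillarsConcatenation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv = per-pixel linear transformation over the concatenated maps.
        self.proj = nn.Conv2d(4 * channels, channels, kernel_size=1)

    @staticmethod
    def _shift(x: torch.Tensor, dh: int, dw: int) -> torch.Tensor:
        # Translate the map by (dh, dw) pixels, padding the exposed border with zeros.
        h, w = x.shape[-2:]
        padded = F.pad(x, (1, 1, 1, 1))
        return padded[:, :, 1 + dh:1 + dh + h, 1 + dw:1 + dw + w]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Four neighboring maps: up, down, left, right.
        neighbors = [self._shift(x, -1, 0), self._shift(x, 1, 0),
                     self._shift(x, 0, -1), self._shift(x, 0, 1)]
        return self.proj(torch.cat(neighbors, dim=1))

# e.g. spc = ShiftedPillarsConcatenation(64); y = spc(torch.randn(2, 64, 56, 56))  # y: (2, 64, 56, 56)
```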
Caterpillar: A Pure-MLP Architecture with Shifted-Pillars-Concatenation
[ "Jin Sun", "Xiaoshuang Shi", "Zhiyuan Wang", "Kaidi Xu", "Heng Tao Shen", "Xiaofeng Zhu" ]
Conference
poster
2305.17644
[ "https://github.com/sunjin19126/caterpillar" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JyOGUqYrbV
@inproceedings{ zhu2024do, title={Do {LLM}s Understand Visual Anomalies? Uncovering {LLM}'s Capabilities in Zero-shot Anomaly Detection}, author={Jiaqi Zhu and Shaofeng Cai and Fang Deng and WuJunran}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JyOGUqYrbV} }
Large vision-language models (LVLMs) are markedly proficient in deriving visual representations guided by natural language. Recent explorations have utilized LVLMs to tackle zero-shot visual anomaly detection (VAD) challenges by pairing images with textual descriptions indicative of normal and abnormal conditions, referred to as anomaly prompts. However, existing approaches depend on static anomaly prompts that are prone to cross-semantic ambiguity, and prioritize global image-level representations over crucial local pixel-level image-to-text alignment that is necessary for accurate anomaly localization. In this paper, we present ALFA, a training-free approach designed to address these challenges via a unified model. We propose a run-time prompt adaptation strategy, which first generates informative anomaly prompts to leverage the capabilities of a large language model (LLM). This strategy is enhanced by a contextual scoring mechanism for per-image anomaly prompt adaptation and cross-semantic ambiguity mitigation. We further introduce a novel fine-grained aligner to fuse local pixel-level semantics for precise anomaly localization, by projecting the image-text alignment from global to local semantic spaces. Extensive evaluations on the challenging MVTec and VisA datasets confirm ALFA's effectiveness in harnessing the language potential for zero-shot VAD, achieving significant PRO improvements of 12.1% on MVTec AD and 8.9% on VisA compared to state-of-the-art zero-shot VAD approaches.
Do LLMs Understand Visual Anomalies? Uncovering LLM's Capabilities in Zero-shot Anomaly Detection
[ "Jiaqi Zhu", "Shaofeng Cai", "Fang Deng", "WuJunran" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JxZlNQy08K
@inproceedings{ liu2024multimodal, title={Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning}, author={Xinwei Liu and Xiaojun Jia and Yuan Xun and Siyuan Liang and Xiaochun Cao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JxZlNQy08K} }
Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot classification by learning from millions of image-caption pairs crawled from the Internet. However, this reliance poses privacy risks, as hackers may unauthorizedly exploit image-text data for model training, potentially including personal and privacy-sensitive information. Recent works propose generating unlearnable examples by adding imperceptible perturbations to training images to build shortcuts for protection. However, they are designed for unimodal classification, which remains largely unexplored in MCL. We first explore this context by evaluating the performance of existing methods on image-caption pairs, and they fail to effectively build shortcuts due to the lack of labels and the dispersion of pairs in MCL. In this paper, we propose Multi-step Error Minimization (MEM), a novel optimization process for generating multimodal unlearnable examples. It extends the Error-Minimization (EM) framework to optimize both image noise and an additional text trigger, thereby enlarging the optimized space and effectively misleading the model to learn the shortcut between the noise features and the text trigger. Specifically, we adopt projected gradient descent to solve the noise minimization problem and use HotFlip to approximate the gradient and replace words to find the optimal text trigger. Extensive experiments demonstrate the effectiveness of MEM, with post-protection retrieval results nearly half of random guessing, and its high transferability across different models.
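As a hedged sketch of the image-noise half of such a multi-step error-minimization loop (the HotFlip text-trigger step is omitted), the snippet below assumes a CLIP-style alignment loss `emb_loss(images, text_ids)` is available; the helper name and hyperparameters are illustrative, not the paper's exact settings.

```python
# Hedged sketch: projected gradient descent that *minimizes* the alignment loss
# w.r.t. an L_inf-bounded perturbation, so the noise becomes an easy-to-learn shortcut.
import torch

def minimize_image_noise(images, text_ids, emb_loss,
                         eps=8 / 255, alpha=1 / 255, steps=10):
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = emb_loss(images + delta, text_ids)  # assumed multimodal alignment loss
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend (not ascend) on the loss
            delta.clamp_(-eps, eps)                # project into the L_inf ball
        delta.grad.zero_()
    return delta.detach()
```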
Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning
[ "Xinwei Liu", "Xiaojun Jia", "Yuan Xun", "Siyuan Liang", "Xiaochun Cao" ]
Conference
poster
2407.16307
[ "https://github.com/thinwayliu/multimodal-unlearnable-examples" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JnXR70w5Go
@inproceedings{ wang2024gpdvvto, title={{GPD}-{VVTO}: Preserving Garment Details in Video Virtual Try-On}, author={Yuanbin Wang and Weilun Dai and Long Chan and Huanyu Zhou and Aixi Zhang and Si Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JnXR70w5Go} }
Video Virtual Try-On aims to transfer a garment onto a person in the video. Previous methods typically focus on image-based virtual try-on, but directly applying these methods to videos often leads to temporal discontinuity due to inconsistencies between frames. Limited attempts in video virtual try-on also suffer from unrealistic results and poor generalization ability. In light of previous research, we posit that the task of video virtual try-on can be decomposed into two key aspects: (1) single-frame results are realistic and natural, while retaining consistency with the garment; (2) the person's actions and the garment are coherent throughout the entire video. To address these two aspects, we propose a novel two-stage framework based on Latent Diffusion Model, namely Garment-Preserving Diffusion for Video Virtual Try-On (GPD-VVTO). In the first stage, the model is trained on single-frame data to improve the ability of generating high-quality try-on images. We integrate both low-level texture features and high-level semantic features of the garment into the denoising network to preserve garment details while ensuring a natural fit between the garment and the person. In the second stage, the model is trained on video data to enhance temporal consistency. We devise a novel Garment-aware Temporal Attention (GTA) module that incorporates garment features into temporal attention, enabling the model to maintain the fidelity to the garment during temporal modeling. Furthermore, we collect a video virtual try-on dataset containing high-resolution videos from diverse scenes, addressing the limited variety of current datasets in terms of video background and human actions. Extensive experiments demonstrate that our method outperforms existing state-of-the-art methods in both image-based and video-based virtual try-on tasks, indicating the effectiveness of our proposed framework.
GPD-VVTO: Preserving Garment Details in Video Virtual Try-On
[ "Yuanbin Wang", "Weilun Dai", "Long Chan", "Huanyu Zhou", "Aixi Zhang", "Si Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JnG3wI5E5P
@inproceedings{ ren2024heterophilic, title={Heterophilic Graph Invariant Learning for Out-of-Distribution of Fraud Detection}, author={Lingfei Ren and Ruimin Hu and Zheng Wang and Yilin Xiao and Dengshi Li and Junhang Wu and Jinzhang Hu and Yilong Zang and Zijun Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JnG3wI5E5P} }
Graph-based fraud detection (GFD) has garnered increasing attention due to its effectiveness in identifying fraudsters within multimedia data such as online transactions, product reviews, or telephone voices. However, the prevalent in-distribution (ID) assumption significantly impedes the generalization of GFD approaches to out-of-distribution (OOD) scenarios, which is a pervasive challenge considering the dynamic nature of fraudulent activities. In this paper, we introduce the Heterophilic Graph Invariant Learning Framework (HGIF), a novel approach to bolster the OOD generalization of GFD. HGIF addresses two pivotal challenges: creating diverse virtual training environments and adapting to varying target distributions. Leveraging edge-aware augmentation, HGIF efficiently generates multiple virtual training environments characterized by generalized heterophily distributions, thereby facilitating robust generalization against fraud graphs with diverse heterophily degrees. Moreover, HGIF employs a shared dual-channel encoder with heterophilic graph contrastive learning, enabling the model to acquire stable high-pass and low-pass node representations during training. During the Test-time Training phase, the shared dual-channel encoder is flexibly fine-tuned to adapt to the test distribution through graph contrastive learning. Extensive experiments showcase HGIF's superior performance over existing methods in OOD generalization, setting a new benchmark for GFD in OOD scenarios.
Heterophilic Graph Invariant Learning for Out-of-Distribution of Fraud Detection
[ "Lingfei Ren", "Ruimin Hu", "Zheng Wang", "Yilin Xiao", "Dengshi Li", "Junhang Wu", "Jinzhang Hu", "Yilong Zang", "Zijun Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JlwOhP3WRj
@inproceedings{ wang2024textgaze, title={TextGaze: Gaze-Controllable Face Generation with Natural Language}, author={Hengfei Wang and Zhongqun Zhang and Yihua Cheng and Hyung Jin Chang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JlwOhP3WRj} }
Generating face image with specific gaze information has attracted considerable attention. Existing approaches typically input gaze values directly for face generation, which is unnatural and requires annotated gaze datasets for training, thereby limiting its application. In this paper, we present a novel gaze-controllable face generation task. Our approach inputs textual descriptions that describe human gaze and head behavior and generates corresponding face images. Our work first introduces a text-of-gaze dataset containing over 90k text descriptions spanning a dense distribution of gaze and head poses. We further propose a gaze-controllable text-to-face method. Our method contains a sketch-conditioned face diffusion module and a model-based sketch diffusion module. We define a face sketch based on facial landmarks and eye segmentation map. The face diffusion module generates face images from the face sketch, and the sketch diffusion module employs a 3D face model to generate face sketch from text description. Experiments on the FFHQ dataset show the effectiveness of our method. We will release our dataset and code for future research.
TextGaze: Gaze-Controllable Face Generation with Natural Language
[ "Hengfei Wang", "Zhongqun Zhang", "Yihua Cheng", "Hyung Jin Chang" ]
Conference
poster
2404.17486
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JhGk5nTSRf
@inproceedings{ luo2024engaging, title={Engaging Live Video Comments Generation}, author={Ge Luo and Yuchen Ma and Manman Zhang and Junqiang Huang and Sheng Li and Zhenxing Qian and Xinpeng Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JhGk5nTSRf} }
Automatic live commenting is increasingly acknowledged as a crucial strategy for improving viewer interaction. However, current methods overlook the significance of creating engaging comments. Engaging comments can not only attract viewers' widespread attention, earning numerous "likes", but also further promote subsequent social comment interactions. In this paper, we introduce a novel framework for generating engaging live video comments, aiming to resonate with viewers and enhance the viewing experience. We then design a Competitive Context Selection Strategy to accelerate differential learning by constructing sample pairs with relatively different levels of attractiveness. This approach addresses the sample imbalance problem between highly-liked and low-liked comments, as well as the relative attractiveness issue of comments within video scenes. Moreover, we develop a Semantic Gap Contrastive Loss to minimize the distance between generated comments and higher-liked comments within the segment, while also widening the gap with lower-liked or unliked comments. This loss function helps the model generate more engaging comments. To support our proposed generation task, we construct a video comment dataset with "like" information, containing 180,000 comments and their "like" counts. Extensive experiments indicate that the comments generated by our method are more engaging, fluent, natural, and diverse than those of the baselines.
Engaging Live Video Comments Generation
[ "Ge Luo", "Yuchen Ma", "Manman Zhang", "Junqiang Huang", "Sheng Li", "Zhenxing Qian", "Xinpeng Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JaIM0fUa8o
@inproceedings{ chen2024hypergraphguided, title={Hypergraph-guided Intra- and Inter-category Relation Modeling for Fine-grained Visual Recognition}, author={Lu Chen and Qiangchang Wang and Zhaohui Li and Yilong Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JaIM0fUa8o} }
Fine-grained Visual Recognition (FGVR) aims to distinguish objects within similar subcategories. Humans adeptly perform this challenging task by leveraging both intra-category distinctiveness and inter-category similarity. However, previous methods failed to combine these two complementary dimensions and mine the intrinsic interrelationship among various semantic features. To address the above limitations, we propose HI2R, a Hypergraph-guided Intra- and Inter-category Relation Modeling approach, which simultaneously extracts the intra-category structural information and inter-category relation information for more precise reasoning. Specifically, we exploit a Hypergraph-guided Structure Learning (HSL) module, which employs hypergraphs to capture high-order structural relations, transcending traditional graph-based methods that are limited to pairwise linkages. This advancement allows the model to adapt to significant intra-category variations. Additionally, we propose an Inter-category Relation Perception (IRP) module to improve feature discrimination across categories by extracting and analyzing semantic relations among them. Our objective is to alleviate the robustness issue associated with exclusive reliance on intra-category discriminative features. Furthermore, a random semantic consistency loss is introduced to direct the model's attention to commonly overlooked yet distinctive regions, which indirectly enhances the representation ability of both HSL and IRP modules. Both qualitative and quantitative results demonstrate the effectiveness and usefulness of our proposed HI2R model.
Hypergraph-guided Intra- and Inter-category Relation Modeling for Fine-grained Visual Recognition
[ "Lu Chen", "Qiangchang Wang", "Zhaohui Li", "Yilong Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JW3oRKrefC
@inproceedings{ chen2024mvpnet, title={{MVP}-Net: Multi-View Depth Image Guided Cross-Modal Distillation Network for Point Cloud Upsampling}, author={jiade chen and Jin Wang and Yunhui Shi and Nam Ling and Baocai Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JW3oRKrefC} }
Point cloud upsampling concerns producing a dense and uniform point set from a sparse and irregular one. Current upsampling methods primarily encounter two challenges: (i) insufficient uni-modal representations of sparse point clouds, and (ii) inaccurate estimation of geometric details in dense point clouds, resulting in suboptimal upsampling results. To tackle these challenges, we propose MVP-Net, a multi-view depth image guided cross-modal detail estimation distillation network for point cloud upsampling, in which the multi-view depth images of point clouds are fully explored to guide upsampling. Firstly, we propose a cross-modal feature extraction module, consisting of two branches designed to extract point features and depth image features separately. This setup aims to produce sufficient cross-modal representations of sparse point clouds. Subsequently, we design a Multi-View Depth Image to Point Feature Fusion (MVP) block to fuse the cross-modal features in a fine-grained and hierarchical manner. The MVP block is incorporated into the feature extraction module. Finally, we introduce a paradigm for multi-view depth image-guided detail estimation and distillation. The teacher network fully utilizes paired multi-view depth images of sparse point clouds and their dense counterparts to formulate multi-hierarchical representations of geometric details, thereby achieving high-fidelity reconstruction. Meanwhile, the student network takes only sparse point clouds and their multi-view depth images as input, and it learns to predict the multi-hierarchical detail representations distilled from the teacher network. Extensive qualitative and quantitative results on both synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art point cloud upsampling methods.
MVP-Net: Multi-View Depth Image Guided Cross-Modal Distillation Network for Point Cloud Upsampling
[ "jiade chen", "Jin Wang", "Yunhui Shi", "Nam Ling", "Baocai Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JUZXOKPhXd
@inproceedings{ liu2024deeply, title={Deeply Fusing Semantics and Interactions for Item Representation Learning via Topology-driven Pre-training}, author={Shiqin Liu and Chaozhuo Li and Xi Zhang and Minjun Zhao and yuanbo xu and Jiajun Bu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JUZXOKPhXd} }
Learning item representation is crucial for a myriad of on-line e-commerce applications. The nucleus of retail item representation learning is how to properly fuse the semantics within a single item, and the interactions across different items generated by user behaviors (e.g., co-click or co-view). Product semantics depict the intrinsic characteristics of the item, while the interactions describe the relationships between items from the perspective of human perception. Existing approaches either solely rely on a single type of information or loosely couple them together, leading to hindered representations. In this work, we propose a novel model named TESPA to reinforce semantic modeling and interaction modeling mutually. Specifically, collaborative filtering signals in the interaction graph are encoded into the language models through fine-grained topological pre-training, and the interaction graph is further enriched based on semantic similarities. After that, a novel multi-channel co-training paradigm is proposed to deeply fuse the semantics and interactions under a unified framework. In a nutshell, TESPA is capable of enjoying the merits of both sides to facilitate item representation learning. Experimental results of on-line and off-line evaluations demonstrate the superiority of our proposal.
Deeply Fusing Semantics and Interactions for Item Representation Learning via Topology-driven Pre-training
[ "Shiqin Liu", "Chaozhuo Li", "Xi Zhang", "Minjun Zhao", "yuanbo xu", "Jiajun Bu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JMSPDTMgX8
@inproceedings{ liao2024crash, title={{CRASH}: Crash Recognition and Anticipation System Harnessing with Context-Aware and Temporal Focus Attentions}, author={Haicheng Liao and Haoyu Sun and Zhenning Li and Huanming Shen and Chengyue Wang and KaHou Tam and Chunlin Tian and Li Li and Cheng-zhong Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JMSPDTMgX8} }
Accurately and promptly predicting accidents among surrounding traffic agents from camera footage is crucial for the safety of autonomous vehicles (AVs). This task presents substantial challenges stemming from the unpredictable nature of traffic accidents, their long-tail distribution, the intricacies of traffic scene dynamics, and the inherently constrained field of vision of onboard cameras. To address these challenges, this study introduces a novel accident anticipation framework for AVs, termed CRASH. It seamlessly integrates five components: object detector, feature extractor, object-aware module, context-aware module, and multi-layer fusion. Specifically, we develop the object-aware module to prioritize high-risk objects in complex and ambiguous environments by calculating the spatial-temporal relationships between traffic agents. In parallel, the context-aware module is also devised to extend global visual information from the temporal to the frequency domain using the Fast Fourier Transform (FFT) and capture fine-grained visual features of potential objects and broader context cues within traffic scenes. To capture a wider range of visual cues, we further propose a multi-layer fusion that dynamically computes the temporal dependencies between different scenes and iteratively updates the correlations between different visual features for accurate and timely accident prediction. Evaluated on real-world datasets, including the Dashcam Accident Dataset (DAD), Car Crash Dataset (CCD), and AnAn Accident Detection (A3D) datasets, our model surpasses existing top baselines in critical evaluation metrics like Average Precision (AP) and mean Time-To-Accident (mTTA). Importantly, its robustness and adaptability are particularly evident in challenging driving scenarios with missing or limited training data, demonstrating significant potential for application in real-world autonomous driving systems.
CRASH: Crash Recognition and Anticipation System Harnessing with Context-Aware and Temporal Focus Attentions
[ "Haicheng Liao", "Haoyu Sun", "Zhenning Li", "Huanming Shen", "Chengyue Wang", "KaHou Tam", "Chunlin Tian", "Li Li", "Cheng-zhong Xu" ]
Conference
poster
2407.17757
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JMMrXtBN3M
@inproceedings{ wang2024proactive, title={Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks}, author={Tianyi Wang and Mengxiao Huang and Harry Cheng and Xiao Zhang and Zhiqi Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JMMrXtBN3M} }
The Deepfake face manipulation technique has garnered significant public attention due to its impact on both enhancing human experiences and posing security and privacy threats. Although numerous passive Deepfake detection algorithms have been proposed to thwart malicious Deepfake attacks, they mostly struggle to generalize when confronted with today's hyper-realistic synthetic facial images. To tackle this problem, this paper proposes a proactive Deepfake detection approach by introducing a novel training-free landmark perceptual watermark, LampMark for short. Firstly, we analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline from the structural representations, i.e., facial landmarks, to binary landmark perceptual watermarks. Subsequently, we present an end-to-end watermarking framework that robustly and imperceptibly embeds and extracts watermarks for the images to be protected. Relying on promising watermark recovery accuracies, Deepfake detection is accomplished by assessing the consistency between the content-matched landmark perceptual watermark and the robustly recovered watermark of the suspect Deepfake image. Experimental results demonstrate the superior performance of our approach in watermark recovery and Deepfake detection compared to state-of-the-art methods across in-dataset, cross-dataset, and cross-manipulation scenarios.
Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks
[ "Tianyi Wang", "Mengxiao Huang", "Harry Cheng", "Xiao Zhang", "Zhiqi Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JDiR6JYHZw
@inproceedings{ zheng2024diversity, title={Diversity Matters: User-Centric Multi-Interest Learning for Conversational Movie Recommendation}, author={Yongsen Zheng and Guohua Wang and Yang Liu and Liang Lin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JDiR6JYHZw} }
Diversity plays a crucial role in Recommender Systems (RSs) as it ensures a wide range of recommended items, providing users with access to new and varied options. Without diversity, users often encounter repetitive content, limiting their exposure to novel choices. While significant efforts have been dedicated to enhancing recommendation diversification in static offline scenarios, relatively less attention has been given to online Conversational Recommender Systems (CRSs). However, the lack of recommendation diversity in CRSs will increasingly exacerbate over time due to the dynamic user-system feedback loop, resulting in challenges such as the Matthew effect, filter bubbles, and echo chambers. To address these issues, we propose an innovative end-to-end CRS paradigm called User-Centric Multi-Interest Learning for Conversational Movie Recommendation (CoMoRec), which aims to learn user interests from multiple perspectives to enhance result diversity as users engage in natural language conversations for movie recommendations. Firstly, CoMoRec automatically models various facets of user interests, including context-based, graph-based, and review-based interests, to explore a wide range of user intentions and preferences. Then, it leverages these multi-aspect user interests to accurately predict personalized and diverse movie recommendations and generate fluent and informative responses during conversations. Through extensive experiments conducted on two publicly available CRS-based movie datasets, our proposed CoMoRec achieves a new state-of-the-art performance and outperforms all the compared baselines in terms of improving recommendation diversity in the CRS.
Diversity Matters: User-Centric Multi-Interest Learning for Conversational Movie Recommendation
[ "Yongsen Zheng", "Guohua Wang", "Yang Liu", "Liang Lin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=JAM4PA7WnE
@inproceedings{ xie2024trajformer, title={Traj2Former: A Local Context-aware Snapshot and Sequential Dual Fusion Transformer for Trajectory Classification}, author={Yuan Xie and Yichen Zhang and Yifang Yin and SHENG ZHANG and Ying Zhang and Rajiv Ratn Shah and Roger Zimmermann and Guoqing Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=JAM4PA7WnE} }
The wide use of mobile devices has led to the creation of extensive trajectory data, rendering trajectory classification increasingly vital and challenging for downstream applications. Existing deep learning methods offer powerful feature extraction capabilities to detect nuanced variances in trajectory classification tasks. However, their effectiveness remains compromised by two unsolved challenges. First, identifying the distribution of nearby trajectories from noisy and sparse GPS coordinates, which provides critical contextual features for classification, poses a significant challenge. Second, though efforts have been made to incorporate a shape feature by rendering trajectories into images, they fail to model the local correspondence between GPS points and image pixels. To address these issues, we propose a novel model termed Traj2Former to spotlight the spatial distribution of the adjacent trajectory points (i.e., the contextual snapshot) and enhance the snapshot fusion between the trajectory data and the corresponding spatial contexts. We propose a new GPS rendering method to generate contextual snapshots, which can be applied to sources ranging from a trajectory database to a digital map. Moreover, to capture diverse temporal patterns, we conduct a multi-scale sequential fusion by compressing the trajectory data at differing rates. Extensive experiments have been conducted to verify the superiority of the Traj2Former model.
Traj2Former: A Local Context-aware Snapshot and Sequential Dual Fusion Transformer for Trajectory Classification
[ "Yuan Xie", "Yichen Zhang", "Yifang Yin", "SHENG ZHANG", "Ying Zhang", "Rajiv Ratn Shah", "Roger Zimmermann", "Guoqing Xiao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=J6c0sRkWop
@inproceedings{ li2024multimodal, title={Multimodal Inplace Prompt Tuning for Open-set Object Detection}, author={Guilin Li and Mengdan Zhang and Xiawu Zheng and Peixian Chen and Zihan Wang and Yunhang Shen and Mingchen Zhuge and Chenglin Wu and Fei Chao and Ke Li and Xing Sun and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=J6c0sRkWop} }
The integration of large language models into open-world detection frameworks significantly improves versatility in new environments. Prompt representations derived from these models help establish classification boundaries for both base and novel categories within open-world detectors. However, we are the first to discover that directly fine-tuning language models in detection systems results in redundant attention patterns and leads to suboptimal prompt representations. In order to fully leverage the capabilities of large language models and augment prompt encoding for detection, this study introduces a redundancy assessment metric to identify uniform attention patterns. Furthermore, in areas with high redundancy, we incorporate multimodal inplace prompt tuning (MIPT) to enrich the text prompt with visual clues. Experimental results validate the efficacy of our MIPT framework, which achieves notable gains across benchmarks, e.g., elevating GLIP-L from 22.6% to 25.0% on ODinW-35 and yielding a 9.0% improvement on LVIS.
Multimodal Inplace Prompt Tuning for Open-set Object Detection
[ "Guilin Li", "Mengdan Zhang", "Xiawu Zheng", "Peixian Chen", "Zihan Wang", "Yunhang Shen", "Mingchen Zhuge", "Chenglin Wu", "Fei Chao", "Ke Li", "Xing Sun", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=J3mF5Ea5JG
@inproceedings{ cheng2024stylizedfacepoint, title={StylizedFacePoint: Facial Landmark Detection for Stylized Characters}, author={Shengran Cheng and Chuhang Ma and Ye Pan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=J3mF5Ea5JG} }
Facial landmark detection forms the foundation for numerous face-related tasks. Recently, this field has gained substantial attention and made significant advancements. Nonetheless, detecting facial landmarks for stylized characters remains a challenge. Existing approaches, which are mostly trained on real-human face datasets, struggle to perform well due to the structural variations between real and stylized characters. Additionally, a comprehensive dataset for analyzing stylized characters' facial features is lacking. This study proposes a novel dataset, the Facial Landmark Dataset for Stylized Characters (FLSC), which contains 2674 images and 4086 faces. The data are selected from 16 cartoon video clips and annotated by professionals with 98 landmarks per image. In addition, we propose StylizedFacePoint, a deep-learning-based method for stylized facial landmark detection that outperforms existing approaches. This method has also proven to work well for characters with styles outside the training domain. Moreover, we outline two primary types of applications for our dataset and method and provide a detailed illustrative example for each.
StylizedFacePoint: Facial Landmark Detection for Stylized Characters
[ "Shengran Cheng", "Chuhang Ma", "Ye Pan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IxSKhO7ed6
@inproceedings{ zhao2024pean, title={{PEAN}: A Diffusion-Based Prior-Enhanced Attention Network for Scene Text Image Super-Resolution}, author={Zuoyan Zhao and Hui Xue and Pengfei Fang and Shipeng Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IxSKhO7ed6} }
Scene text image super-resolution (STISR) aims at simultaneously increasing the resolution and readability of low-resolution scene text images, thus boosting the performance of the downstream recognition task. Two factors in scene text images, visual structure and semantic information, affect the recognition performance significantly. To mitigate the effects of these factors, this paper proposes a Prior-Enhanced Attention Network (PEAN). Specifically, an attention-based modulation module is leveraged to understand scene text images by perceiving the local and global dependencies of images, regardless of the shape of the text. Meanwhile, a diffusion-based module is developed to enhance the text prior, thus offering better guidance for the SR network to generate SR images with higher semantic accuracy. Additionally, a multi-task learning paradigm is employed to optimize the network, enabling the model to generate legible SR images. As a result, PEAN establishes new SOTA results on the TextZoom benchmark. Experiments are also conducted to analyze the importance of the enhanced text prior as a means of improving the performance of the SR network. Code will be made available.
PEAN: A Diffusion-Based Prior-Enhanced Attention Network for Scene Text Image Super-Resolution
[ "Zuoyan Zhao", "Hui Xue", "Pengfei Fang", "Shipeng Zhu" ]
Conference
poster
2311.17955
[ "https://github.com/jdfxzzy/pean" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Il1bbeU2dY
@inproceedings{ shi2024integrating, title={Integrating Stickers into Multimodal Dialogue Summarization: A Novel Dataset and Approach for Enhancing Social Media Interaction}, author={Yuanchen Shi and Fang Kong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Il1bbeU2dY} }
With the popularity of the internet and social media, a growing number of online chats and comment replies are presented in the form of multimodal dialogues that contain stickers. Automatically summarizing these dialogues can effectively reduce content overload and save reading time. However, existing datasets and works either address unimodal text dialogue summarization or handle articles with real photos by performing text summarization and key image extraction separately; none of them consider the automatic summarization of multimodal dialogues with sticker images in online chat scenarios. To compensate for the lack of datasets and research in this field, we propose a brand-new Multimodal Chat Dialogue Summarization Containing Stickers (MCDSCS) task and dataset. It consists of 5,527 Chinese multimodal chat dialogues and 14,356 different sticker images, with each dialogue interspersed with stickers in the text to reflect real social media chat scenarios. MCDSCS also helps fill the gap in Chinese multimodal dialogue data. We use the advanced GPT-4 model with carefully designed Chain-of-Thought (CoT) prompting, supplemented with manual review, to generate dialogues and extract summaries. We also propose a novel method that integrates the visual information of stickers with the text descriptions of emotions and intentions (TEI). Experiments show that our method can effectively improve the performance of various mainstream summary generation models, performing even better than ChatGPT and some other multimodal models. Our data and code will be publicly available.
Integrating Stickers into Multimodal Dialogue Summarization: A Novel Dataset and Approach for Enhancing Social Media Interaction
[ "Yuanchen Shi", "Fang Kong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IkELeZGD2U
@inproceedings{ shengzhang2024information, title={Information Fusion with Knowledge Distillation for Fine-grained Remote Sensing Object Detection}, author={Shengzhang and Xi Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IkELeZGD2U} }
Fine-grained remote sensing object detection aims to locate and identify specific targets with variable scale and orientation against complex backgrounds in high-resolution, wide-swath images, which requires both high precision and real-time processing. Although traditional knowledge distillation shows its effectiveness in model compression and accuracy preservation for natural images, the heavy background noise and intra-class similarity of remote sensing images limit the knowledge quality of the teacher model and the learning ability of the student model. To address these issues, we propose an Information Fusion with Knowledge Distillation (IFKD) method that enhances the student model's performance by integrating information from external images, the frequency domain, and hyperbolic space. Firstly, we propose an external interference enhancement (EDE) module, which utilizes MobileSAM to introduce external information that enriches the teacher's knowledge set, competes with the teacher for the right to cultivate the student, and weakens the student's dependence on the teacher. Secondly, to strengthen the representation of key features and improve the quality of knowledge, a frequency domain reconstruction (FDR) module is proposed, which mainly resamples the low-frequency background components to suppress the interference of background noise. Finally, aiming at the problem of intra-class similarity, a hyperbolic similarity mask (HSM) module is designed to magnify intra-class differences and guide the student to analyze the teacher's knowledge, exploiting the exponentially growing capacity of hyperbolic space. Experiments on the optical ShipRSImageNet and SAR Aircraft-1.0 datasets verify that the IFKD method significantly enhances performance in fine-grained recognition tasks compared to existing distillation techniques: it reaches 65.8% $AP_{50}$ on ShipRSImageNet, an improvement of 2.6%, and 81.4% $AP_{50}$ on SAR Aircraft-1.0, an improvement of 1.4%.
Information Fusion with Knowledge Distillation for Fine-grained Remote Sensing Object Detection
[ "Shengzhang", "Xi Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IjULktl95t
@inproceedings{ yuzhihuang2024psam, title={P{\textasciicircum}2{SAM}: Probabilistically Prompted {SAM}s Are Efficient Segmentator for Ambiguous Medical Images}, author={Yuzhihuang and Chenxin Li and ZiXu Lin and Hengyu Liu and haote xu and Yifan Liu and Yue Huang and Xinghao Ding and Xiaotong Tu and Yixuan Yuan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IjULktl95t} }
The ability to predict multiple potential outputs for a single input can significantly address visual ambiguity, such as the diverse semantic segmentation annotations for a medical image provided by different experts. Existing methods employ various advanced probabilistic modeling techniques to model the ambiguous prediction, yet they often struggle to fit the underlying distribution of multiple outputs when only a limited amount of ambiguously labeled data is available, which is usually the case in real-world applications. To overcome these challenges, we propose a framework, termed P²SAM, that leverages the prior knowledge of foundation models when segmenting ambiguous objects. We delve into an inherent disadvantage of SAM, i.e., the sensitivity of its output to prompts, and turn it into an advantage for ambiguous segmentation by introducing a prompt generation module. Experimental results demonstrate that by utilizing only a small number of doctor-annotated ambiguous samples, our strategy significantly enhances the precision and diversity of medical segmentation. In rigorous benchmarking experiments against cutting-edge methods, our method achieves increased segmentation precision and diversified outputs with even fewer training data (5.5% of the samples, +12% $D_{max}$). P²SAM signifies a steady step towards the practical deployment of probabilistic models in real-world data-limited scenarios.
P^2SAM: Probabilistically Prompted SAMs Are Efficient Segmentator for Ambiguous Medical Images
[ "Yuzhihuang", "Chenxin Li", "ZiXu Lin", "Hengyu Liu", "haote xu", "Yifan Liu", "Yue Huang", "Xinghao Ding", "Xiaotong Tu", "Yixuan Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IiDczMpPFn
@inproceedings{ yi2024aesstyler, title={AesStyler: Aesthetic Guided Universal Style Transfer}, author={Ran Yi and Haokun Zhu and Teng Hu and Yu-Kun Lai and Paul L Rosin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IiDczMpPFn} }
Recent studies have shown impressive progress in universal style transfer which can integrate arbitrary styles into content images. However, existing approaches struggle with low aesthetics and disharmonious patterns in the final results. To address this problem, we propose AesStyler, a novel Aesthetic Guided Universal Style Transfer method. Specifically, our approach introduces the aesthetic assessment model, trained on a dataset with human-assessed aesthetic scores, into the universal style transfer task to accurately capture aesthetic features that universally resonate with human aesthetic preferences. Unlike previous methods which only consider aesthetics of specific style images, we propose to build a Universal Aesthetic Codebook (UAC) to harness universal aesthetic features that encapsulate the global aspects of aesthetics. Aesthetic features are fed into a novel Universal and Style-specific Aesthetic-Guided Attention (USAesA) module to guide the style transfer process. USAesA empowers our model to integrate the aesthetic attributes of both universal and style-specific aesthetic features with style features and facilitates the fusion of these aesthetically enhanced style features with content features. Extensive experiments and user studies have demonstrated that our approach generates aesthetically more harmonious and pleasing results than the state-of-the-art methods, both aesthetic-free and aesthetic-aware.
AesStyler: Aesthetic Guided Universal Style Transfer
[ "Ran Yi", "Haokun Zhu", "Teng Hu", "Yu-Kun Lai", "Paul L Rosin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ii5W5cQSMr
@inproceedings{ wang2024sustainable, title={Sustainable Self-evolution Adversarial Training}, author={Wenxuan Wang and Chenglei Wang and huihui Qi and Menghao Ye and Xuelin Qian and PENG WANG and Yanning Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ii5W5cQSMr} }
With the wide application of deep neural network models in various computer vision tasks, there has been a proliferation of adversarial example generation strategies aimed at exploring model security deeply. However, existing adversarial training defense models, which rely on single or limited types of attacks under a one-time learning process, struggle to adapt to the dynamic and evolving nature of attack methods. Therefore, to achieve defense performance improvements for models in long-term applications, we propose a novel Sustainable Self-evolution Adversarial Training (SSEAT) framework. Specifically, we introduce a continual adversarial defense pipeline to realize learning from various kinds of adversarial examples across multiple stages. Additionally, to address the issue of model catastrophic forgetting caused by continual learning from ongoing novel attacks, we propose an adversarial data replay module to better select more diverse and key relearning data. Furthermore, we design a consistency regularization strategy to encourage current defense models to learn more from previously trained ones, guiding them to retain more past knowledge and maintain accuracy on clean samples. Extensive experiments have been conducted to verify the efficacy of the proposed SSEAT defense method, which demonstrates superior defense performance and classification accuracy compared to competitors.
Sustainable Self-evolution Adversarial Training
[ "Wenxuan Wang", "Chenglei Wang", "huihui Qi", "Menghao Ye", "Xuelin Qian", "PENG WANG", "Yanning Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IgQ3xrYXTc
@inproceedings{ zhao2024dfmvc, title={{DFMVC}: Deep Fair Multi-view Clustering}, author={Bowen Zhao and Qianqian Wang and ZHIQIANG TAO and Wei Feng and Quanxue Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IgQ3xrYXTc} }
Existing fair multi-view clustering methods impose a constraint that requires the distribution of sensitive attributes to be uniform within each cluster. However, this constraint can lead to misallocation of samples with sensitive attributes. To solve this problem, we propose a novel Deep Fair Multi-View Clustering (DFMVC) method that learns a consistent and discriminative representation instructed by a fairness constraint constructed from the distribution of clusters. Specifically, we incorporate contrastive constraints on semantic features from different views to obtain consistent and discriminative representations for each view. Additionally, we align the distribution of sensitive attributes with the target cluster distribution to achieve optimal fairness in clustering results. Experimental results on four datasets with sensitive attributes demonstrate that our method improves both the fairness and performance of clustering compared to state-of-the-art multi-view clustering methods.
DFMVC: Deep Fair Multi-view Clustering
[ "Bowen Zhao", "Qianqian Wang", "ZHIQIANG TAO", "Wei Feng", "Quanxue Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IfPaymV4oz
@inproceedings{ liu2024colvo, title={Col{VO}: Colonoscopic Visual Odometry Considering Geometric and Photometric Consistency}, author={Ruyu Liu and Zhengzhe Liu and ZHANG HAOYU and Guodao Zhang and Jianhua Zhang and Bo Sun and Weiguo Sheng and Xiufeng Liu and Yaochu Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IfPaymV4oz} }
Locating lesions is the primary goal of colonoscopy examinations. 3D perception techniques can enhance the accuracy of lesion localization by restoring 3D spatial information of the colon. However, existing methods focus on the local depth estimation of a single frame and neglect the precise global positioning of the colonoscope, thus failing to provide the accurate 3D location of lesions. The root causes of this shortfall are twofold: Firstly, existing methods treat colon depth and colonoscope pose estimation as independent tasks or design them as parallel sub-task branches. Secondly, the light source in the colon environment moves with the colonoscope, leading to brightness fluctuations across consecutive frames. To address these two issues, we propose ColVO, a novel deep learning-based Visual Odometry framework, which can continuously estimate colon depth and colonoscopic pose using two key components: a deep couple strategy for depth and pose estimation (DCDP) and a light consistent calibration mechanism (LCC). DCDP uses multimodal fusion and loss-function constraints to couple the depth and pose estimation modes, ensuring seamless alignment of geometric projections between consecutive frames. Meanwhile, LCC accounts for brightness variations by recalibrating the luminosity values of adjacent frames, enhancing ColVO's robustness. A comprehensive evaluation of ColVO on colon odometry benchmarks reveals its superiority over state-of-the-art methods in depth and pose estimation. We also demonstrate two valuable applications: immediate polyp localization and complete 3D reconstruction of the intestine. The code for ColVO is available at https://github.com/xxx/xxx.
ColVO: Colonoscopic Visual Odometry Considering Geometric and Photometric Consistency
[ "Ruyu Liu", "Zhengzhe Liu", "ZHANG HAOYU", "Guodao Zhang", "Jianhua Zhang", "Bo Sun", "Weiguo Sheng", "Xiufeng Liu", "Yaochu Jin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0