Dataset columns (name, type, and value range as reported by the dataset viewer):

bibtex_url                  null
proceedings                 string        length 42–42
bibtext                     string        length 215–445
abstract                    string        length 820–2.37k
title                       string        length 24–147
authors                     sequence      length 1–13
id                          string        1 class
type                        string        2 classes
arxiv_id                    string        length 0–10
GitHub                      sequence      length 1–1
paper_page                  string        33 classes
n_linked_authors            int64         -1 to 4
upvotes                     int64         -1 to 21
num_comments                int64         -1 to 4
n_authors                   int64         -1 to 11
Models                      sequence      length 0–1
Datasets                    sequence      length 0–1
Spaces                      sequence      length 0–4
old_Models                  sequence      length 0–1
old_Datasets                sequence      length 0–1
old_Spaces                  sequence      length 0–4
paper_page_exists_pre_conf  int64         0 to 1
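For orientation, here is a minimal sketch of how a split with the schema above could be loaded and queried using the Hugging Face `datasets` library. The repository id below is a placeholder assumption (the actual dataset name is not given here), and the filters merely illustrate the column semantics listed in the table.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under a
# placeholder id; replace "your-username/acmmm-2024-accepted-papers" with the real one.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-username/acmmm-2024-accepted-papers", split="train")  # hypothetical id

# The column names and feature types should match the schema table above.
print(ds.column_names)

# Count presentation types (the `type` column holds 2 classes, e.g. "poster" / "oral").
print(Counter(ds["type"]))

# Keep only records that link a GitHub repository and have a non-empty arXiv id.
with_code = ds.filter(lambda row: row["GitHub"] != [""] and row["arxiv_id"] != "")
print(f"{len(with_code)} of {len(ds)} records link both code and an arXiv preprint")
```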
null
https://openreview.net/forum?id=fLbdDspNW3
@inproceedings{ qiu2024learning, title={Learning Realistic Sketching: A Dual-agent Reinforcement Learning Approach}, author={Ji Qiu and Peng Lu and Xujun Peng and Wenhao Guo and Zhaoran Zhao and XiangTao Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=fLbdDspNW3} }
This paper presents a pioneering method for teaching computer sketching that transforms input images into sequential, parameterized strokes. This sketching task raises two challenges: weak stimuli during stroke decomposition, and maintaining semantic correctness, stylistic consistency, and detail integrity in the final drawings. To tackle the challenge of weak stimuli, our method incorporates an attention agent, which enhances the algorithm's sensitivity to subtle canvas changes by focusing on smaller, magnified areas. To enhance the perceived quality of drawing outcomes, we integrate a sketching style feature extractor that captures semantic information and performs style adaptation, alongside a drawing agent that decomposes strokes under the guidance of the XDoG reward, thereby ensuring the integrity of sketch details. Based on these dual intelligent agents, we construct an efficient sketching model. Experimental results attest to the superiority of our approach in both visual effects and perceptual metrics compared to state-of-the-art techniques, confirming its efficacy in achieving realistic sketching.
Learning Realistic Sketching: A Dual-agent Reinforcement Learning Approach
[ "Ji Qiu", "Peng Lu", "Xujun Peng", "Wenhao Guo", "Zhaoran Zhao", "XiangTao Dong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=fJqsaTHBgT
@inproceedings{ shen2024graph, title={Graph Convolutional Semi-Supervised Cross-Modal Hashing}, author={Xiaobo Shen and GaoyaoYu and YinFan Chen and Xichen Yang and Yuhui Zheng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=fJqsaTHBgT} }
Cross-modal hashing encodes different modalities of multi-modal data into a low-dimensional Hamming space for fast cross-modal retrieval. Most existing cross-modal hashing methods heavily rely on label semantics to boost retrieval performance; however, semantics are expensive to collect in real applications. To mitigate the heavy reliance on semantics, this work proposes a new semi-supervised deep cross-modal hashing method, namely, Graph Convolutional Semi-Supervised Cross-Modal Hashing (GCSCH), which is trained with limited label supervision. The proposed GCSCH first generates pseudo-multi-labels of the unlabeled samples using the simple yet effective idea of consistency regularization and pseudo-labeling. GCSCH designs a fusion network that merges the two modalities and employs Graph Convolutional Network (GCN) to capture semantic information among ground-truth-labeled and pseudo-labeled multi-modal data. Using the idea of knowledge distillation, GCSCH employs a teacher-student learning scheme that can successfully transfer knowledge from the fusion module to the image and text hashing networks. Empirical studies on three multi-modal benchmark datasets demonstrate the superiority of the proposed GCSCH over state-of-the-art cross-modal hashing methods with limited label supervision.
Graph Convolutional Semi-Supervised Cross-Modal Hashing
[ "Xiaobo Shen", "GaoyaoYu", "YinFan Chen", "Xichen Yang", "Yuhui Zheng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=fHXOPQ3aBf
@inproceedings{ gao2024fbsdiff, title={{FBSD}iff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation}, author={Xiang Gao and Jiaying Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=fHXOPQ3aBf} }
Large-scale text-to-image diffusion models have been a revolutionary milestone in the evolution of generative AI and multimodal technology, allowing extraordinary image generation with natural-language text prompts. However, the issue of lacking controllability of such models restricts their practical applicability for real-life content creation, for which attention has been focused on leveraging a reference image to control text-to-image synthesis. This paper contributes a concise and efficient approach that adapts the pre-trained text-to-image (T2I) diffusion model to the image-to-image (I2I) paradigm in a plug-and-play manner, realizing high-quality and versatile text-driven I2I translation without any model training, model fine-tuning, or online optimization. To guide T2I generation with a reference image, we propose to model diverse guiding factors with different frequency bands of diffusion features in DCT spectral space, and accordingly devise a novel frequency band substitution layer that dynamically substitutes a certain DCT frequency band of diffusion features with the corresponding counterpart of the reference image along the reverse sampling process. We demonstrate that our method flexibly enables highly controllable text-driven I2I translation both in the guiding factor and guiding intensity of the reference image, simply by adjusting the type and bandwidth of the substituted frequency band, respectively. Extensive experiments verify the superiority of our approach over related methods in image translation visual quality, versatility, and efficiency.
FBSDiff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation
[ "Xiang Gao", "Jiaying Liu" ]
Conference
poster
2408.00998
[ "https://github.com/xianggao1102/fbsdiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=fBeeQlkIM8
@inproceedings{ yin2024backdoor, title={Backdoor Attacks on Bimodal Salient Object Detection with {RGB}-Thermal Data}, author={Wen Yin and Bin Benjamin Zhu and Yulai Xie and Pan Zhou and Dan Feng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=fBeeQlkIM8} }
RGB-Thermal salient object detection (RGBT-SOD) plays a critical role in complex scene recognition fields such as autonomous driving, yet security research in this area remains limited. This paper introduces the first backdoor attack targeting RGBT-SOD, generating saliency maps on triggered inputs that depict non-existent salient objects chosen by the attacker, or designate no salient region (all black pixels) or the entire image as a salient region (all white pixels). We uncover that triggers possess an influence range for generating non-existent salient objects, supported by a theoretical approximation provided in this study. Extensive experimental evaluations validate the efficacy of our attack in both digital domain and physical-world scenarios. Notably, our dual-modality backdoor attack achieves an Attack Success Rate (ASR) of 86.72% with only 5 pairs of images in model training. Despite exploring potential countermeasures, we find them ineffective in thwarting our attacks, underscoring the urgent need for robust defenses against sophisticated backdoor attacks in RGBT-SOD systems.
Backdoor Attacks on Bimodal Salient Object Detection with RGB-Thermal Data
[ "Wen Yin", "Bin Benjamin Zhu", "Yulai Xie", "Pan Zhou", "Dan Feng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=f6K8Dj8uBT
@inproceedings{ shen2024balanced, title={Balanced Multi-Relational Graph Clustering}, author={Zhixiang Shen and Haolan He and zhao kang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=f6K8Dj8uBT} }
Multi-relational graph clustering has demonstrated remarkable success in uncovering underlying patterns in complex networks. Representative methods manage to align different views motivated by advances in contrastive learning. Our empirical study finds the pervasive presence of imbalance in real-world graphs, which is in principle contradictory to the motivation of alignment. In this paper, we first propose a novel metric, the Aggregation Class Distance, to empirically quantify structural disparities among different graphs. To address the challenge of view imbalance, we propose Balanced Multi-Relational Graph Clustering (BMGC), comprising unsupervised dominant view mining and dual signals guided representation learning. It dynamically mines the dominant view throughout the training process, synergistically improving clustering performance with representation learning. Theoretical analysis ensures the effectiveness of dominant view mining. Extensive experiments and in-depth analysis on real-world and synthetic datasets showcase that BMGC achieves state-of-the-art performance, underscoring its superiority in addressing the view imbalance inherent in multi-relational graphs. The source code and datasets are available at https://github.com/zxlearningdeep/BMGC.
Balanced Multi-Relational Graph Clustering
[ "Zhixiang Shen", "Haolan He", "zhao kang" ]
Conference
poster
2407.16863
[ "https://github.com/zxlearningdeep/bmgc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eyYcMjUtJ4
@inproceedings{ wang2024digging, title={Digging into contrastive learning for robust depth estimation with diffusion models}, author={JiYuan Wang and Chunyu Lin and Lang Nie and Kang Liao and Shuwei Shao and Yao Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eyYcMjUtJ4} }
Recently, diffusion-based depth estimation methods have drawn widespread attention due to their elegant denoising patterns and promising performance. However, they are typically unreliable under adverse conditions prevalent in real-world scenarios, such as rain and snow. In this paper, we propose a novel robust depth estimation method with a customized contrastive learning mode for diffusion models, named D4RD, to resist performance degradation in complex environments. Concretely, we integrate the strength of knowledge distillation into contrastive learning, building the `trinity' contrastive scheme. It takes the sampled noise of the forward diffusion process as a natural reference and guides the predicted noise in different scenes to converge towards a more stable and precise optimum. Meanwhile, we further extend the noise-level trinity to the more generic feature and image levels, building a multi-level contrast to distribute the burden of robust perception across the overall network. Moreover, before handling complex scenarios, we enhance the stability of the baseline diffusion model with three simple but effective improvements, which facilitate convergence and remove depth outliers. Extensive experiments show that D4RD achieves superior performance to existing state-of-the-art (SoTA) solutions on both synthetic corruption datasets and real-world weather conditions. The code will be available.
Digging into contrastive learning for robust depth estimation with diffusion models
[ "JiYuan Wang", "Chunyu Lin", "Lang Nie", "Kang Liao", "Shuwei Shao", "Yao Zhao" ]
Conference
poster
2404.09831
[ "https://github.com/wangjiyuan9/d4rd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=erYyoNwKhJ
@inproceedings{ cheng2024diffusion, title={Diffusion Facial Forgery Detection}, author={Harry Cheng and Yangyang Guo and Tianyi Wang and Liqiang Nie and Mohan Kankanhalli}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=erYyoNwKhJ} }
Detecting diffusion-generated images has recently grown into an emerging research area. Existing diffusion-based datasets predominantly focus on general image generation. However, facial forgeries, which pose severe social risks, have remained less explored thus far. To address this gap, this paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images. DiFF comprises over 500,000 images that are synthesized using thirteen distinct generation methods under four conditions. In particular, this dataset utilizes 30,000 carefully collected textual and visual prompts, ensuring the synthesis of images with both high fidelity and semantic consistency. We conduct extensive experiments on the DiFF dataset via human subject tests and several representative forgery detection methods. The results demonstrate that the binary detection accuracies of both human observers and automated detectors often fall below 30%, revealing insights on the challenges in detecting diffusion-generated facial forgeries. Moreover, our experiments demonstrate that DiFF, compared to previous facial forgery datasets, contains a more diverse and realistic range of forgeries, showcasing its potential to aid in the development of more generalized detectors. Finally, we propose an edge graph regularization approach to effectively enhance the generalization capability of existing detectors.
Diffusion Facial Forgery Detection
[ "Harry Cheng", "Yangyang Guo", "Tianyi Wang", "Liqiang Nie", "Mohan Kankanhalli" ]
Conference
poster
2401.15859
[ "https://github.com/xacheng1996/diff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=epLGidFcqi
@inproceedings{ wang2024fedsls, title={Fed{SLS}: Exploring Federated Aggregation in Saliency Latent Space}, author={Hengyi Wang and Weiying Xie and Jitao Ma and DaixunLi and Yunsong Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=epLGidFcqi} }
Federated Learning (FL) is an emerging direction in distributed machine learning that enables jointly training a global model without sharing data with the server. However, data heterogeneity biases the parameter aggregation at the server, leading to slower convergence and poorer accuracy of the global model. To cope with this, most existing works enforce regularization in local optimization or improve the model aggregation scheme at the server. Though effective, they lack a deep understanding of cross-client features. In this paper, we propose a saliency latent space feature aggregation method (FedSLS) across federated clients. By Guided BackPropagation (GBP), we transform deep models into powerful and flexible visual fidelity encoders, applicable to general state inputs across different image domains, and achieve powerful aggregation in the form of saliency latent features. Notably, since GBP is label-insensitive, it is sufficient to capture saliency features only once on each client. Experimental results demonstrate that FedSLS leads to significant improvements over the state of the art in terms of accuracy, especially in highly heterogeneous settings. For example, on the CIFAR-10 dataset, FedSLS achieves 63.43% accuracy in the strongly heterogeneous setting α=0.05, which is 6% to 23% higher than the other baselines.
FedSLS: Exploring Federated Aggregation in Saliency Latent Space
[ "Hengyi Wang", "Weiying Xie", "Jitao Ma", "DaixunLi", "Yunsong Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eouJdbzFNq
@inproceedings{ wang2024facialpulse, title={FacialPulse: An Efficient {RNN}-based Depression Detection via Temporal Facial Landmarks}, author={Ruiqi Wang and Jinyang Huang and Jie Zhang and Xin Liu and Xiang Zhang and Zhi Liu and Peng Zhao and Sigui Chen and Xiao Sun}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eouJdbzFNq} }
Depression is a prevalent mental health disorder that significantly impacts individuals' lives and well-being. Early detection and intervention are crucial for effective treatment and management of depression. Recently, many end-to-end deep learning methods have leveraged facial expression features for automatic depression detection. However, most current methods overlook the temporal dynamics of facial expressions. Although very recent 3DCNN methods remedy this gap, they introduce more computational cost due to the selection of CNN-based backbones and redundant facial features. To address the above limitations, by considering the temporal correlation of facial expressions, we propose a novel framework called FacialPulse, which recognizes depression with high accuracy and speed. By harnessing the bidirectional nature and proficiently addressing long-term dependencies, the Facial Motion Modeling Module (FMMM) is designed in FacialPulse to fully capture temporal features. Since the proposed FMMM has parallel processing capabilities and a gate mechanism to mitigate gradient vanishing, this module can also significantly boost the training speed. Besides, to effectively replace original images with facial landmarks and decrease information redundancy, a Facial Landmark Calibration Module (FLCM) is designed to eliminate facial landmark errors and further improve recognition accuracy. Extensive experiments on the AVEC2014 dataset and the MMDA dataset (a depression dataset) demonstrate the superiority of FacialPulse in recognition accuracy and speed, with the average MAE (Mean Absolute Error) decreased by 22\% and the recognition speed increased by 100\% compared to state-of-the-art baselines.
FacialPulse: An Efficient RNN-based Depression Detection via Temporal Facial Landmarks
[ "Ruiqi Wang", "Jinyang Huang", "Jie Zhang", "Xin Liu", "Xiang Zhang", "Zhi Liu", "Peng Zhao", "Sigui Chen", "Xiao Sun" ]
Conference
oral
2408.03499
[ "https://github.com/volatileee/facialpulse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eoaw2A8X4J
@inproceedings{ chen2024dpo, title={{DPO}: Dual-Perturbation Optimization for Test-time Adaptation in 3D Object Detection}, author={Zhuoxiao Chen and Zixin Wang and Yadan Luo and Sen Wang and Zi Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eoaw2A8X4J} }
LiDAR-based 3D object detection has seen impressive advances in recent times. However, deploying trained 3D detectors in the real world often yields unsatisfactory performance when the distribution of the test data significantly deviates from the training data due to different weather conditions, object sizes, etc. A key factor in this performance degradation is the diminished generalizability of pre-trained models, which creates a sharp loss landscape during training. Such sharpness, when encountered during testing, can precipitate significant performance declines, even with minor data variations. To address the aforementioned challenges, we propose dual-perturbation optimization (DPO) for Test-time Adaptation in 3D Object Detection (TTA-3OD). We minimize the sharpness to cultivate a flat loss landscape to ensure model resiliency to minor data variations, thereby enhancing the generalization of the adaptation process. To fully capture the inherent variability of the test point clouds, we further introduce adversarial perturbation to the input BEV features to better simulate the noisy test environment. As the dual perturbation strategy relies on trustworthy supervision signals, we utilize a reliable Hungarian matcher to filter out pseudo-labels sensitive to perturbations. Additionally, we introduce early Hungarian cutoff to avoid error accumulation from incorrect pseudo-labels by halting the adaptation process. Extensive experiments across three types of transfer tasks demonstrate that the proposed DPO significantly surpasses previous state-of-the-art approaches, specifically on Waymo $\rightarrow$ KITTI, outperforming the most competitive baseline by 57.72\% in $\text{AP}_\text{3D}$ and reaching 91% of the fully supervised upper bound. Our code is available in the supplementary materials.
DPO: Dual-Perturbation Optimization for Test-time Adaptation in 3D Object Detection
[ "Zhuoxiao Chen", "Zixin Wang", "Yadan Luo", "Sen Wang", "Zi Huang" ]
Conference
poster
2406.13891
[ "https://github.com/jo-wang/dpo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=elDjJp4Upl
@inproceedings{ zhou2024avhash, title={{AVH}ash: Joint Audio-Visual Hashing for Video Retrieval}, author={Yuxiang Zhou and Zhe Sun and Rui Liu and Yong Chen and Dell Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=elDjJp4Upl} }
Video hashing is a technique of encoding videos into binary vectors, facilitating efficient video storage and high-speed computation. Current approaches to video hashing predominantly utilize sequential frame images to produce semantic binary codes. However, videos encompass not only visual but also audio signals. Therefore, we propose a tri-level Transformer-based audio-visual hashing technique for video retrieval, named AVHash. It first processes audio and visual signals separately using pre-trained AST and ViT large models, and then projects temporal audio and keyframes into a shared latent semantic space using a Transformer encoder. Subsequently, a gated attention mechanism is designed to fuse the paired audio-visual signals in the video, followed by another Transformer encoder leading to the final video representation. The training of this AVHash model is directed by a video-based contrastive loss as well as a semantic alignment regularization term for audio-visual signals. Experimental results show that AVHash significantly outperforms existing video hashing methods in video retrieval tasks. Furthermore, ablation studies reveal that while video hashing based solely on visual signals achieves commendable mAP scores, the incorporation of audio signals can further boost its performance for video retrieval.
AVHash: Joint Audio-Visual Hashing for Video Retrieval
[ "Yuxiang Zhou", "Zhe Sun", "Rui Liu", "Yong Chen", "Dell Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eivmdSj9u1
@inproceedings{ lu2024designing, title={Designing Spatial Visualization and Interactions of Immersive Sankey Diagram in Virtual Reality}, author={Yang Lu and junxianli and Zhitong Cui and Jiapeng Hu and Yanna Lin and Shijian Luo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eivmdSj9u1} }
Virtual reality (VR) is a revolutionary method of presenting data visualizations, which brings new possibilities for enhancing analytical activities. However, applying this method to visualize complex data flows remains largely underexplored, especially for Sankey diagrams, which have an advantageous capacity to represent trends in data flows. In this work, we explored a novel design for an immersive Sankey diagram system within VR environments, utilizing a three-dimensional visual design and several interaction techniques that leveraged VR's spatial and immersive capabilities. Through two comparative user studies, we found that the VR Sankey diagram system was effective in improving task performance and engagement and in reducing cognitive workload in complex data analysis. We contribute an interactive, immersive Sankey diagram system in VR environments, empirical evidence of its advantages, and design lessons for future immersive visualization tools.
Designing Spatial Visualization and Interactions of Immersive Sankey Diagram in Virtual Reality
[ "Yang Lu", "junxianli", "Zhitong Cui", "Jiapeng Hu", "Yanna Lin", "Shijian Luo" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eiGs5VCsYM
@inproceedings{ zhang2024differentialperceptive, title={Differential-Perceptive and Retrieval-Augmented {MLLM} for Change Captioning}, author={Xian Zhang and Haokun Wen and Jianlong Wu and Pengda Qin and Hui Xue' and Liqiang Nie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eiGs5VCsYM} }
Change captioning involves describing the subtle changes between a pair of similar images. Although existing efforts have achieved compelling success, they overlook the potential of multimodal large language models (MLLMs) in tackling this challenging task. In this work, we aim to empower MLLMs with the capability to perceive subtle differences between paired images and enhance their performance in generating change captions. Specifically, we present a diFferentIal-perceptive aNd rEtRieval-augmented MLLM (FINER-MLLM) tailored for this task. In particular, FINER-MLLM leverages the LoRA fine-tuned MLLM's image encoder to extract image patch features, enabling the capture of detailed image information. Subsequently, within the MLLM's feature extraction module, typically a Q-Former, FINER-MLLM incorporates dual constraints: the intra-image feature independence constraint and the inter-image feature alignment constraint. These constraints ensure that the features comprehensively extract subtle visual information within each image and that corresponding features across images align effectively. Last, we introduce retrieval augmentation, which first retrieves a relevant corpus to facilitate the MLLM's decoder, \textit{i.e.}, the LLM, in generating accurate change captions. Extensive experiments on three benchmark datasets, \textit{i.e.}, CLEVR-Change, Spot-the-Diff, and Image-Editing-Request, demonstrate the superiority of our proposed method.
Differential-Perceptive and Retrieval-Augmented MLLM for Change Captioning
[ "Xian Zhang", "Haokun Wen", "Jianlong Wu", "Pengda Qin", "Hui Xue'", "Liqiang Nie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=efTur2naAS
@inproceedings{ liu2024attentive, title={Attentive Linguistic Tracking in Diffusion Models for Training-free Text-guided Image Editing}, author={Bingyan Liu and Chengyu Wang and Jun Huang and Kui Jia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=efTur2naAS} }
Building on recent breakthroughs in diffusion-based text-to-image synthesis (TIS), training-free text-guided image editing (TIE) has become an indispensable aspect of modern image editing practices. It involves modifying the features in attention layers to alter objects or their attributes within images during the generation process. Yet, current image editing algorithms still present certain difficulties and challenges when it comes to editing multiple objects within an image. In this paper, we propose VICTORIA, a novel approach that enhances TIE by incorporating linguistic knowledge when manipulating attention maps during image generation. VICTORIA leverages components within self-attention layers to maintain spatial consistency between source and target images. Additionally, a novel loss function is designed to refine cross-attention maps, ensuring their alignment with linguistic constraints and enhancing the editing of multiple target entities. We also introduce a linguistic mask blending technique to improve the retention of information in areas exempt from modification. Experimental results across seven diverse datasets demonstrate that VICTORIA achieves substantial enhancements over state-of-the-art methods. This work highlights the critical role and effectiveness of linguistic analysis in boosting the performance of TIE.
Attentive Linguistic Tracking in Diffusion Models for Training-free Text-guided Image Editing
[ "Bingyan Liu", "Chengyu Wang", "Jun Huang", "Kui Jia" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eaZlswPyKk
@inproceedings{ jiang2024dvf, title={{DVF}: Advancing Robust and Accurate Fine-Grained Image Retrieval with Retrieval Guidelines}, author={Xin Jiang and Hao Tang and Rui Yan and Jinhui Tang and Zechao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eaZlswPyKk} }
Fine-grained image retrieval (FGIR) aims to learn to generate visual representations that distinguish visually similar objects while maintaining generalization. Existing methods propose various techniques to generate discriminative features, but rarely consider the particularity of the FGIR task itself. This paper presents a meticulous analysis leading to practical guidelines for identifying subcategory-specific discrepancies and generating discriminative features to design high-performance FGIR models. These guidelines include emphasizing the object (G1), highlighting subcategory-specific discrepancies (G2), and employing an effective training strategy (G3). Following G1 and G2, we design a novel Dual Visual Filtering mechanism for the plain visual transformer, denoted as DVF, to capture subcategory-specific discrepancies. Specifically, the dual visual filtering mechanism comprises an object-oriented module and a semantic-oriented module. These components serve to magnify objects and identify discriminative regions, respectively. Following G3, we implement a discriminative model training strategy to improve the discriminability and generalization ability of DVF. Extensive analysis and ablation studies confirm the efficacy of our proposed guidelines. Without bells and whistles, our DVF achieves state-of-the-art performance on three widely-used fine-grained datasets in closed-set and open-set settings.
DVF: Advancing Robust and Accurate Fine-Grained Image Retrieval with Retrieval Guidelines
[ "Xin Jiang", "Hao Tang", "Rui Yan", "Jinhui Tang", "Zechao Li" ]
Conference
poster
2404.15771
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eZpm234cw2
@inproceedings{ he2024robust, title={Robust Variational Contrastive Learning for Partially View-unaligned Clustering}, author={Changhao He and Hongyuan Zhu and Peng Hu and Xi Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eZpm234cw2} }
Although multi-view learning has achieved remarkable progress over the past decades, most existing methods implicitly assume that all views (or modalities) are well-aligned. In practice, however, collecting fully aligned views is challenging due to complexities and discordances in time and space, resulting in the Partially View-unaligned Problem (PVP), such as audio-video asynchrony caused by network congestion. While some methods are proposed to align the unaligned views by learning view-invariant representations, almost all of them overlook specific information across different views for complementarity, limiting performance improvement. To address these problems, we propose a robust framework, dubbed \textbf{V}ariat\textbf{I}onal Con\textbf{T}r\textbf{A}stive \textbf{L}earning (VITAL), designed to learn both common and specific information simultaneously. To be specific, each data sample is first modeled as a Gaussian distribution in the latent space, where the mean estimates the most probable common information, while the variance indicates view-specific information. Second, by using variational inference, VITAL conducts intra- and inter-view contrastive learning to preserve common and specific semantics in the distribution representations, thereby achieving comprehensive perception. As a result, the common representation (mean) could be used to guide category-level realignment, while the specific representation (variance) complements sample semantic information, thereby boosting overall performance. Finally, considering the abundance of False Negative Pairs (FNPs) generated by unsupervised contrastive learning, we propose a robust loss function that seamlessly incorporates FNP rectification into the contrastive learning paradigm. Empirical evaluations on eight benchmark datasets reveal that VITAL outperforms ten state-of-the-art deep clustering baselines, demonstrating its efficacy in both partially and fully aligned scenarios. The Code is available at \url{https://github.com/He-Changhao/2024-MM-VITAL}.
Robust Variational Contrastive Learning for Partially View-unaligned Clustering
[ "Changhao He", "Hongyuan Zhu", "Peng Hu", "Xi Peng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eUD6yhIM5O
@inproceedings{ liu2024regional, title={Regional Attention For Shadow Removal}, author={Hengxing Liu and Mingjia Li and Xiaojie Guo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eUD6yhIM5O} }
Shadow, as a natural consequence of light interacting with objects, plays a crucial role in shaping the aesthetics of an image; however, it also impairs content visibility and overall visual quality. Recent shadow removal approaches employ the mechanism of attention, due to its effectiveness, as a key component. However, they often suffer from two issues: large model size and high computational complexity, which hinder practical use. To address these shortcomings, this work devises a lightweight yet accurate shadow removal framework. First, we analyze the characteristics of the shadow removal task to identify the key information required for reconstructing shadow regions, and design a novel regional attention mechanism to effectively capture such information. Then, we customize a Regional Attention Shadow Removal Model (RASM, in short), which leverages non-shadow areas to assist in restoring shadow ones. Unlike existing attention-based models, our regional attention strategy allows each shadow region to interact more rationally with its surrounding non-shadow areas, seeking the regional contextual correlation between shadow and non-shadow areas. Extensive experiments are conducted to demonstrate that our proposed method delivers superior performance over other state-of-the-art models in terms of accuracy and efficiency, making it appealing for practical applications. Our code will be made publicly available.
Regional Attention For Shadow Removal
[ "Hengxing Liu", "Mingjia Li", "Xiaojie Guo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eQ45U7ufe4
@inproceedings{ fang2024toward, title={Toward Robust Live Streaming over {LEO} Satellite Constellations: Measurement, Analysis, and Handover-Aware Adaptation}, author={Hao Fang and Haoyuan Zhao and Jianxin Shi and Miao Zhang and Guanzhen Wu and Yi Ching Chou and FENG WANG and Jiangchuan Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eQ45U7ufe4} }
Live streaming has experienced significant growth recently. Yet this rise in popularity contrasts with the reality that a substantial segment of the global population still lacks Internet access. The emergence of Low Earth orbit Satellite Networks (LSNs), such as SpaceX's Starlink and Amazon's Project Kuiper, presents a promising solution to this issue. Nevertheless, our measurement study reveals that existing live streaming platforms may not be able to deliver a smooth viewing experience on LSNs due to frequent satellite handovers, leading to frequent rebuffering events. Current state-of-the-art learning-based Adaptive Bitrate (ABR) algorithms, even when trained on satellite network traces, fail to manage the abrupt network variations associated with these handovers effectively. To address these challenges, for the first time, we introduce Satellite-Aware Rate Adaptation (SARA), a versatile and lightweight middleware that can be seamlessly integrated with various ABR algorithms to enhance the performance of live streaming over LSNs. SARA intelligently modulates video playback speed and furnishes ABR algorithms with key insights derived from the distinctive network characteristics of LSNs, thereby aiding ABR algorithms in making informed bitrate selections and effectively minimizing rebuffering events that occur during satellite handovers. Our extensive evaluation shows that SARA can effectively reduce the rebuffering time by an average of 39.41\% and slightly improve latency by 0.65\% while only introducing an overall loss in bitrate by 0.13\%.
Toward Robust Live Streaming over LEO Satellite Constellations: Measurement, Analysis, and Handover-Aware Adaptation
[ "Hao Fang", "Haoyuan Zhao", "Jianxin Shi", "Miao Zhang", "Guanzhen Wu", "Yi Ching Chou", "FENG WANG", "Jiangchuan Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eLpaw4H2Nh
@inproceedings{ chen2024querymatch, title={QueryMatch: A Query-based Contrastive Learning Framework for Weakly Supervised Visual Grounding}, author={Shengxin Chen and Gen Luo and Yiyi Zhou and Xiaoshuai Sun and GUANNAN JIANG and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eLpaw4H2Nh} }
Visual grounding is a task of locating the object referred to by a natural language description. To reduce annotation costs, recent research has been devoted to one-stage weakly supervised methods for visual grounding, which typically adopt the anchor-text matching paradigm. Despite their efficiency, we identify that anchor representations are often noisy and insufficient to describe object information, which inevitably hinders the vision-language alignments. In this paper, we propose a novel query-based one-stage framework for weakly supervised visual grounding, namely QueryMatch. Different from previous work, QueryMatch represents candidate objects with a set of query features, which inherently establish accurate one-to-one associations with visual objects. In this case, QueryMatch re-formulates weakly supervised visual grounding as a query-text matching problem, which can be optimized via query-based contrastive learning. Based on QueryMatch, we further propose an innovative strategy for effective weakly supervised learning, namely Negative Sample Quality Estimation (NSQE). In particular, NSQE aims to augment negative training samples by actively selecting high-quality query features. Through this strategy, NSQE can greatly benefit the weakly supervised learning of QueryMatch. To validate our approach, we conduct extensive experiments on three benchmark datasets of two grounding tasks, i.e., referring expression comprehension (REC) and segmentation (RES). Experimental results not only show the state-of-the-art performance of QueryMatch in the two tasks, e.g., over +5\% [email protected] on RefCOCO in REC and over +20\% mIOU on RefCOCO in RES, but also confirm the effectiveness of NSQE in weakly supervised learning. Source codes are available at~\url{https://anonymous.4open.science/r/QueryMatch-A82C}.
QueryMatch: A Query-based Contrastive Learning Framework for Weakly Supervised Visual Grounding
[ "Shengxin Chen", "Gen Luo", "Yiyi Zhou", "Xiaoshuai Sun", "GUANNAN JIANG", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eK9ShhDqwu
@inproceedings{ liu2024generative, title={Generative Expressive Conversational Speech Synthesis}, author={Rui Liu and Yifan Hu and Yi Ren and Xiang Yin and Haizhou Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eK9ShhDqwu} }
Conversational Speech Synthesis (CSS) aims to express a target utterance with the proper speaking style in a user-agent conversation setting. Existing CSS methods employ effective multi-modal context modeling techniques to achieve empathy understanding and expression. However, they often need to design complex network architectures and meticulously optimize the modules within them. In addition, due to the limitations of small-scale datasets containing scripted recording styles, they often fail to simulate real natural conversational styles. To address the above issues, we propose a novel generative expressive CSS system, termed **GPT-Talker**. We transform the multimodal information of the multi-turn dialogue history into discrete token sequences and seamlessly integrate them to form a comprehensive user-agent dialogue context. Leveraging the power of GPT, we predict the agent's response token sequence, which includes both semantic and style knowledge. After that, the expressive conversational speech is synthesized by the conversation-enriched VITS to deliver feedback to the user. Furthermore, we propose a large-scale Natural CSS Dataset called **NCSSD**, which includes both naturally recorded conversational speech in improvised styles and dialogues extracted from TV shows. It encompasses both Chinese and English languages, with a total duration of 236 hours. We conducted comprehensive experiments on the reliability of the NCSSD and the effectiveness of our GPT-Talker. Both subjective and objective evaluations demonstrate that our model outperforms other state-of-the-art CSS systems significantly in terms of naturalness and expressiveness. _The Code, Dataset, and Pre-trained Model are available at: https://github.com/AI-S2-Lab/GPT-Talker._
Generative Expressive Conversational Speech Synthesis
[ "Rui Liu", "Yifan Hu", "Yi Ren", "Xiang Yin", "Haizhou Li" ]
Conference
poster
2407.21491
[ "https://github.com/ai-s2-lab/gpt-talker" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eJGfwz86Ch
@inproceedings{ dai2024eglcr, title={Eglcr: Edge Structure Guidance and Scale Adaptive Attention for Iterative Stereo Matching}, author={Zhien Dai and Zhaohui Tang and Hu Zhang and Can Tian and Mingjun Pan and Yongfang Xie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eJGfwz86Ch} }
Stereo matching is a pivotal technique for depth estimation and has been popularly applied in various computer vision tasks. Although many related methods have been reported recently, they still face some challenges such as significant disparity variations at object boundaries, difficult prediction at large disparity regions, and suboptimal generalization when label distribution varies between source and target domains. Therefore, we propose a stereo-matching model (i.e., EGLCR-Stereo) that utilizes edge structure information with adaptive fusion of multi-scale matching similarity information for disparity estimation. First, we use a lightweight network to predict the initial disparity. We apply large and small-scale similarity feature extraction modules to extract the matching similarity information within the wide-area receptive field and the refined matching similarity information under the local receptive field. Then, we develop a scale adaptive attention module for efficiently fusing information at different scales. Meanwhile, we propose an edge structure-aware module for exploring edge information in the scene. After that, we use an iterative-based strategy for disparity estimation using edge structure information with fused multi-scale matching similarity information. We conduct abundant experiments on some popular stereo matching datasets including Middlebury, KITTI, ETH3D, and Scene Flow. The results show that our proposed EGLCR-Stereo achieves state-of-the-art performance both in accuracy and generalization.
Eglcr: Edge Structure Guidance and Scale Adaptive Attention for Iterative Stereo Matching
[ "Zhien Dai", "Zhaohui Tang", "Hu Zhang", "Can Tian", "Mingjun Pan", "Yongfang Xie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eGNXda1ooK
@inproceedings{ zhong2024vlreader, title={{VL}-Reader: Vision and Language Reconstructor is an Effective Scene Text Recognizer}, author={Humen Zhong and Zhibo Yang and Zhaohai Li and Peng Wang and Jun Tang and Wenqing Cheng and Cong Yao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=eGNXda1ooK} }
Text recognition is an inherent integration of vision and language, encompassing the visual texture in stroke patterns and the semantic context among the character sequences. Towards advanced text recognition, there are three key challenges: (1) an encoder capable of representing the visual and semantic distribution; (2) a decoder that supervises the alignment between vision and semantics; and (3) consistency in the framework during pre-training and fine-tuning. Inspired by masked autoencoding, a successful pre-training strategy in both vision and language, we propose an innovative scene text recognition approach, named VL-Reader. The novelty of VL-Reader lies in that the interplay between vision and language is pervasive throughout the entire process, not only in the encoding stage but also in the decoding stage, which has been previously overlooked. Concretely, we first introduce a Masked Visual-Linguistic Reconstruction (MVLR) objective, which aims at simultaneously modeling visual and linguistic information. Then, we design a Masked Visual-Linguistic Decoder (MVLD) to further leverage bi-modal feature interaction. The architecture of VL-Reader maintains consistency from training to inference. In the pre-training stage, VL-Reader reconstructs both masked visual and text tokens, while in the fine-tuning stage, the network degrades to reconstruct all characters from an image without any masked regions. VL-Reader achieves an average accuracy of 97.1% on six typical datasets, surpassing the SOTA by 1.1%. The improvement is even more significant on challenging datasets. The results demonstrate that a vision and language reconstructor can serve as an effective scene text recognizer.
VL-Reader: Vision and Language Reconstructor is an Effective Scene Text Recognizer
[ "Humen Zhong", "Zhibo Yang", "Zhaohai Li", "Peng Wang", "Jun Tang", "Wenqing Cheng", "Cong Yao" ]
Conference
poster
2409.11656
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=e6UOkENriP
@inproceedings{ gan2024dac, title={{DAC}: 2D-3D Retrieval with Noisy Labels via Divide-and-Conquer Alignment and Correction}, author={Chaofan Gan and Yuanpeng Tu and Yuxi Li and Weiyao Lin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=e6UOkENriP} }
With the recent burst of 2D and 3D data, cross-modal retrieval has attracted increasing attention. However, manual labeling by non-experts will inevitably introduce corrupted annotations given ambiguous 2D/3D content, leading to performance degradation. Though previous works have addressed this issue by designing a naive division strategy with hand-crafted thresholds, their performance generally exhibits great sensitivity to the threshold value, implying their poor robustness in real-world scenarios. Besides, they fail to fully utilize the valuable supervisory signals within each divided subset. To tackle this problem, we propose a Divide-and-conquer 2D-3D cross-modal Alignment and Correction framework (DAC), which comprises Multimodal Dynamic Division (MDD) and Adaptive Alignment and Correction (AAC). Specifically, the former performs accurate sample division by adaptive credibility modeling for each sample based on the compensation information within the multimodal loss distribution. Then in AAC, samples in distinct subsets are exploited with different alignment strategies to fully enhance the semantic compactness and meanwhile alleviate over-fitting to noisy labels, where a self-correction strategy is introduced to improve the quality of representation by mining the valuable supervisory signals from multimodal predictions as well. Moreover, to evaluate the effectiveness in real-world scenarios, we introduce a challenging noisy benchmark, namely Objaverse-N200, which comprises 200k-level samples annotated with 1156 realistic noisy labels. Extensive experiments on both traditional and the newly proposed benchmarks demonstrate the generality and superiority of our DAC, where DAC outperforms state-of-the-art models by a large margin (i.e., with +5.9\% gain on ModelNet40 and +5.8\% on Objaverse-N200).
DAC: 2D-3D Retrieval with Noisy Labels via Divide-and-Conquer Alignment and Correction
[ "Chaofan Gan", "Yuanpeng Tu", "Yuxi Li", "Weiyao Lin" ]
Conference
poster
2407.17779
[ "https://github.com/ganchaofan0000/DAC" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dxbHuJtIpK
@inproceedings{ hou2024virtual, title={Virtual Visual-Guided Domain-Shadow Fusion via Modal Exchanging for Domain-Specific Multi-Modal Neural Machine Translation}, author={Zhenyu Hou and Junjun Guo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dxbHuJtIpK} }
Incorporating domain-specific visual information into text poses one of the critical challenges for domain-specific multi-modal neural machine translation (DMNMT). While most existing DMNMT methods often borrow multi-modal fusion frameworks from multi-modal neural machine translation (MNMT) in the general domain, they overlook the domain gaps between general and specific domains. Visual-to-textual interaction in a specific domain frequently exhibits multi-focus characteristics, making it difficult to consistently focus on domain-specific multi-visual details using traditional multi-modal fusion frameworks. This challenge can lead to a decrease in machine translation performance for domain-specific terms. To tackle this problem, this paper presents a virtual visual scene-guided domain-shadow multi-modal fusion mechanism to simultaneously integrate multi-grained domain visual details and text with the guidance of modality-agnostic virtual visual scene, thereby enhancing machine translation performance for DMNMT, especially for domain terms. Specifically, we first adopt a modality-mixing selection-voting strategy to generate modality-mixed domain-shadow representations through layer-by-layer intra-modality selection and inter-modality exchanging. Then, we gradually aggregate modality-mixed domain representations and text across modality boundaries with the guidance of a modality-agnostic virtual visual scene to enhance the collaboration between domain characteristics and textual semantics. The experimental results on three benchmark datasets demonstrate that our proposed approach outperforms the state-of-the-art (SOTA) methods in all machine translation tasks. The in-depth analysis further highlights the robustness and generalizability of our approach across various scenarios. Our code is available on https://github.com/HZY2023/VVDF.
Virtual Visual-Guided Domain-Shadow Fusion via Modal Exchanging for Domain-Specific Multi-Modal Neural Machine Translation
[ "Zhenyu Hou", "Junjun Guo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ds0m0KGG8j
@inproceedings{ yang2024learning, title={Learning with Alignments: Tackling the Inter- and Intra-domain Shifts for Cross-multidomain Facial Expression Recognition}, author={Yuxiang Yang and Lu Wen and Xinyi Zeng and Yuanyuan Xu and Xi Wu and Jiliu Zhou and Yan Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ds0m0KGG8j} }
Facial Expression Recognition (FER) holds significant importance in human-computer interactions. Existing cross-domain FER methods often transfer knowledge solely from a single labeled source domain to an unlabeled target domain, neglecting the comprehensive information across multiple sources. Nevertheless, cross-multidomain FER (CMFER) is very challenging for (i) the inherent inter-domain shifts across multiple domains and (ii) the intra-domain shifts stemming from the ambiguous expressions and low inter-class distinctions. In this paper, we propose a novel Learning with Alignments CMFER framework, named LA-CMFER, to handle both inter- and intra-domain shifts. Specifically, LA-CMFER is constructed with a global branch and a local branch to extract features from the full images and local subtle expressions, respectively. Based on this, LA-CMFER presents a dual-level inter-domain alignment method to force the model to prioritize hard-to-align samples in knowledge transfer at a sample level while gradually generating a well-clustered feature space with the guidance of class attributes at a cluster level, thus narrowing the inter-domain shifts. To address the intra-domain shifts, LA-CMFER introduces a multi-view intra-domain alignment method with a multi-view clustering consistency constraint where a prediction similarity matrix is built to pursue consistency between the global and local views, thus refining pseudo labels and eliminating latent noise. Extensive experiments on six benchmark datasets have validated the superiority of our LA-CMFER.
Learning with Alignments: Tackling the Inter- and Intra-domain Shifts for Cross-multidomain Facial Expression Recognition
[ "Yuxiang Yang", "Lu Wen", "Xinyi Zeng", "Yuanyuan Xu", "Xi Wu", "Jiliu Zhou", "Yan Wang" ]
Conference
poster
2407.05688
[ "https://github.com/yyx-future/la-cmfer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dqZwMihjXF
@inproceedings{ chen2024causal, title={Causal Visual-semantic Correlation for Zero-shot Learning}, author={Shuhuang Chen and Dingjie Fu and Shiming Chen and shuo Ye and Wenjin Hou and Xinge You}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dqZwMihjXF} }
Zero-Shot Learning (ZSL) correlates visual samples and shared semantic information to transfer knowledge from seen classes to unseen classes. Existing methods typically establish visual-semantic correlation by aligning visual and semantic features, which are extracted from visual samples and semantic information, respectively. However, instance-level images, owing to singular observation perspectives and diverse individuals, cannot exactly match the comprehensive semantic information defined at the class level. Direct feature alignment imposes correlation between mismatched vision and semantics, resulting in spurious visual-semantic correlation. To address this, we propose a novel method termed Causal Visual-semantic Correlation (CVsC) to learn substantive visual-semantic correlation for ZSL. Specifically, we utilize a Visual Semantic Attention module to facilitate interaction between vision and semantics, thereby identifying attribute-related visual features. Furthermore, we design a Conditional Correlation Loss to properly utilize semantic information as supervision for establishing visual-semantic correlation. Moreover, we introduce counterfactual intervention applied to attribute-related visual features, and maximize their impact on semantic and target predictions to enhance substantive visual-semantic correlation. Extensive experiments conducted on three benchmark datasets (i.e., CUB, SUN, and AWA2) demonstrate that our CVsC outperforms existing state-of-the-art methods.
Causal Visual-semantic Correlation for Zero-shot Learning
[ "Shuhuang Chen", "Dingjie Fu", "Shiming Chen", "shuo Ye", "Wenjin Hou", "Xinge You" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=doXzSQURuU
@inproceedings{ li2024multimodal, title={Multi-Modal Inductive Framework for Text-Video Retrieval}, author={Qian Li and Yucheng Zhou and Cheng Ji and Feihong Lu and Jianian Gong and Shangguang Wang and Jianxin Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=doXzSQURuU} }
Text-video retrieval (TVR) identifies relevant videos based on textual queries. Existing methods are limited in their ability to understand and connect different modalities, resulting in increased retrieval difficulty. In this paper, we propose a generation-based TVR paradigm facilitated by LLM distillation to better learn and capture deep retrieval knowledge for text-video retrieval, amidst the rapid evolution of Large Language Models. Specifically, we first design a fine-tuned large vision-language model that leverages knowledge learned from language models to enhance the alignment of semantic information between the text and video modalities. It also incorporates an inductive reasoning mechanism, which focuses on incorporating important temporal and spatial features into the video embeddings. We further design question prompt clustering to select the most important prompts, considering their contribution to improving retrieval performance. Experimental results show that our approach achieves excellent performance on two benchmark datasets compared to its competitors.
Multi-Modal Inductive Framework for Text-Video Retrieval
[ "Qian Li", "Yucheng Zhou", "Cheng Ji", "Feihong Lu", "Jianian Gong", "Shangguang Wang", "Jianxin Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dnCmL6ntNJ
@inproceedings{ steinert2024, title={256 Metaverse Recording Dataset}, author={Patrick Steinert and Stefan Wagenpfeil and Ingo Frommholz and Matthias Hemmje}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dnCmL6ntNJ} }
The metaverse is an evolving field and the subject of multimedia research. In this paper, we introduce the 256-MetaverseRecords dataset, a novel and extensive collection of annotated screen recordings in the form of videos from various virtual worlds of the metaverse. We describe the process of creating the dataset, the quality criteria for the annotations, and the exploration of the dataset. We also show four experiments to evaluate the performance of different feature extraction methods for Metaverse Recordings (MVRs): MVR segmentation, audio event detection, and object and interaction detection based on this dataset. Our results demonstrate that existing methods have limitations and leave challenges in dealing with the diversity and complexity of metaverse data, and that more research is needed to develop metaverse-specific techniques. Our dataset can serve as a valuable resource for the research community and foster the development of new applications and solutions for the metaverse.
256 Metaverse Recording Dataset
[ "Patrick Steinert", "Stefan Wagenpfeil", "Ingo Frommholz", "Matthias Hemmje" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dmkAVtukfK
@inproceedings{ zang2024generalized, title={Generalized Source-free Domain-adaptive Segmentation via Reliable Knowledge Propagation}, author={Qi Zang and Shuang Wang and Dong Zhao and Yang HU and Dou Quan and Jinlong Li and Nicu Sebe and Zhun Zhong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dmkAVtukfK} }
Unanticipated domain shifts can severely degrade model performance, prompting the need for model adaptation techniques (i.e., Source-free Domain Adaptation (SFDA)) to adapt a model to new domains without accessing source data. However, existing SFDA methods often sacrifice source domain performance to improve adaptation on the target, limiting overall model capability. In this paper, we focus on a more challenging paradigm in semantic segmentation, Generalized SFDA (G-SFDA), aiming to achieve robust performance on both source and target domains. To achieve this, we propose a novel G-SFDA framework, Reliable Knowledge Propagation (RKP), for semantic segmentation tasks, which leverages the text-to-image diffusion model to propagate reliable semantic knowledge from the segmentation model. The key of RKP lies in aggregating the predicted reliable but scattered segments into a complete semantic layout and using them to activate the diffusion model for conditional generation. Subsequently, diverse images with multiple domain factors can be synthesized to retrain the segmentation model. This enables the segmentation model to learn domain-invariant knowledge across multiple domains, improving its adaptability to the target domain, maintaining discriminability to the source domain, and even handling unseen domains. Our model-agnostic RKP framework establishes new state-of-the-art results across current SFDA segmentation benchmarks, significantly advancing various SFDA methods. The code will be open source.
Generalized Source-free Domain-adaptive Segmentation via Reliable Knowledge Propagation
[ "Qi Zang", "Shuang Wang", "Dong Zhao", "Yang HU", "Dou Quan", "Jinlong Li", "Nicu Sebe", "Zhun Zhong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=djkKU3NIqH
@inproceedings{ xie2024moba, title={Mo{BA}: Mixture of Bidirectional Adapter for Multi-modal Sarcasm Detection}, author={Yifeng Xie and Zhihong Zhu and Xin Chen and Zhanpeng Chen and Zhiqi Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=djkKU3NIqH} }
In the field of multi-modal learning, model parameters are typically large, necessitating the use of parameter-efficient fine-tuning (PEFT) techniques. These methods have been pivotal in enhancing training efficiency for downstream tasks in almost all situations. However, directly applying PEFT methods struggles to fully address the intricate demands of multi-modal tasks, such as multi-modal sarcasm detection (MSD), which demands the extraction and comparison of cues from different modalities. MSD, particularly when reliant on textual and visual modalities, faces challenges in identifying sarcasm's incongruity. This issue often arises from the lack of intermodality interaction during tuning, resulting in a disconnect between textual and visual information. In this paper, we introduce a novel approach called Bi-directional Adapter (BA), designated as MoBA. This approach is designed to minimize training parameters while enhancing the model's ability to interpret sarcasm across modalities. By facilitating an exchange between textual and visual information through a low-rank representation, our method adeptly captures the nuances of sarcastic expressions with a reduced number of training parameters. Our empirical studies, carried out on two publicly accessible and emerging datasets, demonstrate that our model substantially improves sarcasm detection accuracy. These findings indicate that our approach provides a more reliable and efficient solution to address the complexities of MSD.
MoBA: Mixture of Bi-directional Adapter for Multi-modal Sarcasm Detection
[ "Yifeng Xie", "Zhihong Zhu", "Xin Chen", "Zhanpeng Chen", "Zhiqi Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=di99IjsY2T
@inproceedings{ li2024wavedn, title={Wave{DN}: A Wavelet-based Training-free Zero-shot Enhancement for Vision-Language Models}, author={Jiulin Li and Mengyu Yang and Ye Tian and Lanshan Zhang and Yongchun Lu and Jice Liu and Wendong Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=di99IjsY2T} }
Vision-Language Models (VLMs) built on contrastive learning, such as CLIP, demonstrate great transferability and excel in downstream tasks like zero-shot classification and retrieval. To further enhance the performance of VLMs, existing methods have introduced additional parameter modules or fine-tuned VLMs on downstream datasets. However, these methods often fall short in scenarios where labeled data for downstream tasks is either unavailable or insufficient for fine-tuning, and the training of additional parameter modules may considerably impair the existing transferability of VLMs. To alleviate this issue, we introduce WaveDN, a wavelet-based distribution normalization method that can boost the VLMs' performance on downstream tasks without parametric modules or labeled data. Initially, wavelet distributions are extracted from the embeddings of the sampled, unlabeled test samples. Subsequently, WaveDN conducts a hierarchical normalization across the wavelet coefficients of all embeddings, thereby incorporating the distributional characteristics of the test data. Finally, the normalized embeddings are reconstructed via inverse wavelet transformation, facilitating the computation of similarity metrics between the samples. Through extensive experiments on two downstream tasks, using a total of 14 datasets covering text-image and text-audio modal data, WaveDN has demonstrated superiority compared to state-of-the-art methods.
WaveDN: A Wavelet-based Training-free Zero-shot Enhancement for Vision-Language Models
[ "Jiulin Li", "Mengyu Yang", "Ye Tian", "Lanshan Zhang", "Yongchun Lu", "Jice Liu", "Wendong Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dhptoztu6b
@inproceedings{ zhao2024lanecmkt, title={Lane{CMKT}: Boosting Monocular 3D Lane Detection with Cross-Modal Knowledge Transfer}, author={Runkai Zhao and Heng Wang and Weidong Cai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dhptoztu6b} }
Detecting 3D lane lines from monocular images is garnering increasing attention in the Autonomous Driving (AD) area due to its cost-effectiveness. However, current monocular image models capture road scenes without 3D spatial awareness, making them error-prone under adverse environmental changes. In this work, we design a novel cross-modal knowledge transfer scheme, namely LaneCMKT, to address this issue by transferring 3D geometric cues learned from a pre-trained LiDAR model to the image model. Performing on the unified Bird's-Eye-View (BEV) grid, our monocular image model acts as a student network and benefits from the spatial guidance of the 3D LiDAR teacher model over the intermediate feature space. Since LiDAR points and image pixels are intrinsically two different modalities, to facilitate such heterogeneous feature transfer learning at matching levels, we propose a dual-path knowledge transfer mechanism. We divide the feature space into shallow and deep paths where the image student model is prompted to focus on lane-favored geometric cues from the LiDAR teacher model. We conduct extensive experiments and thorough analysis on the large-scale public benchmark OpenLane. Our model achieves notable improvements over the image baseline by 5.3% and the current BEV-driven SoTA method by 2.7% in the F1 score, without introducing any extra computational overhead. We also observe that the 3D abilities acquired from the teacher model are critical for dealing with complex spatial lane properties from a 2D perspective.
LaneCMKT: Boosting Monocular 3D Lane Detection with Cross-Modal Knowledge Transfer
[ "Runkai Zhao", "Heng Wang", "Weidong Cai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dhG6bA25D3
@inproceedings{ wang2024fedevalfair, title={FedEvalFair: A Privacy-Preserving and Statistically Grounded Federated Fairness Evaluation Framework}, author={Zhongchi Wang and Hailong Sun and Zhengyang Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dhG6bA25D3} }
Federated learning has rapidly gained attention in the industrial sector due to its significant advantages in protecting privacy. However, ensuring the fairness of federated learning models post-deployment presents a challenge in practical applications. Given that clients typically rely on limited private datasets to assess model fairness, this constrains their ability to make accurate judgments about the fairness of the model. To address this issue, we propose an innovative evaluation framework, FedEvalFair, which integrates private data from multiple clients to comprehensively assess the fairness of models in actual deployment without compromising data privacy. Firstly, FedEvalFair draws on the concept of federated learning to achieve a comprehensive assessment while protecting privacy. Secondly, based on the statistical concept of 'estimating the population from the sample', FedEvalFair is capable of estimating the fairness performance of the model in real-world settings from a limited data sample. Thirdly, we have designed a flexible two-stage evaluation strategy based on statistical hypothesis testing. We verified the theoretical performance and sensitivity to fairness variations of FedEvalFair using Monte Carlo simulations, demonstrating the superior performance of its two-stage evaluation strategy. Additionally, we validated the effectiveness of the FedEvalFair method on real-world datasets, including UCI Adult and eICU, and demonstrated its stability in dealing with real-world data distribution changes compared to traditional evaluation methods.
FedEvalFair: A Privacy-Preserving and Statistically Grounded Federated Fairness Evaluation Framework
[ "Zhongchi Wang", "Hailong Sun", "Zhengyang Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=deb6LE1bqm
@inproceedings{ liu2024anitalker, title={AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding}, author={Tao Liu and Feilong.chen and Shuai Fan and Chenpeng Du and Qi Chen and Xie Chen and Kai Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=deb6LE1bqm} }
The paper introduces AniTalker, an innovative framework designed to generate lifelike talking faces from a single portrait. Unlike existing models that primarily focus on verbal cues such as lip synchronization and fail to capture the complex dynamics of facial expressions and nonverbal cues, AniTalker employs a universal motion representation. This innovative representation effectively captures a wide range of facial dynamics, including subtle expressions and head movements. AniTalker enhances motion depiction through two self-supervised learning strategies: the first involves reconstructing target video frames from source frames within the same identity to learn subtle motion representations, and the second develops an identity encoder using metric learning while actively minimizing mutual information between the identity and motion encoders. This approach ensures that the motion representation is dynamic and devoid of identity-specific details, significantly reducing the need for labeled data. Additionally, the integration of a diffusion model with a variance adapter allows for the generation of diverse and controllable facial animations. This method not only demonstrates AniTalker’s capability to create detailed and realistic facial movements but also underscores its potential in crafting dynamic avatars for real-world applications. Synthetic results can be viewed at https://anitalker.github.io.
AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding
[ "Tao Liu", "Feilong.chen", "Shuai Fan", "Chenpeng Du", "Qi Chen", "Xie Chen", "Kai Yu" ]
Conference
poster
2405.03121
[ "https://github.com/x-lance/anitalker" ]
https://huggingface.co/papers/2405.03121
1
1
0
7
[]
[]
[ "Delik/Anitalker", "Uhhy/Anitalker", "nikkmitra/talking_image", "Hhblvjgvg/Anitalker" ]
[]
[]
[ "Delik/Anitalker", "Uhhy/Anitalker", "nikkmitra/talking_image", "Hhblvjgvg/Anitalker" ]
1
null
https://openreview.net/forum?id=de7GoqU3Uv
@inproceedings{ li2024freepih, title={Free{PIH}: Training-Free Painterly Image Harmonization with Diffusion Model}, author={Ruibin Li and Jingcai Guo and Qihua Zhou and Song Guo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=de7GoqU3Uv} }
This paper provides an efficient training-free painterly image harmonization (PIH) method, dubbed FreePIH, that leverages only a pre-trained diffusion model to achieve state-of-the-art harmonization results. Unlike existing methods that require either training auxiliary networks or fine-tuning a large pre-trained backbone, or both, to harmonize a foreground object with a painterly-style background image, our FreePIH tames the denoising process as a plug-in module for foreground image style transfer. Specifically, we find that the very last few steps of the denoising (i.e., generation) process strongly correspond to the stylistic information of images, and based on this, we propose to augment the latent features of both the foreground and background images with Gaussians for a direct denoising-based harmonization. To guarantee the fidelity of the harmonized image, we make use of multi-scale features to enforce the consistency of the content and stability of the foreground objects in the latent space, and meanwhile, aligning both fore-/back-grounds with the same style. Moreover, to accommodate the generation with more structural and textural details, we further integrate text prompts to attend to the latent features, hence improving the generation quality. Quantitative and qualitative evaluations on COCO and LAION 5B datasets demonstrate that our method can surpass representative baselines by large margins.
FreePIH: Training-Free Painterly Image Harmonization with Diffusion Model
[ "Ruibin Li", "Jingcai Guo", "Qihua Zhou", "Song Guo" ]
Conference
oral
2311.14926
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=daxLPN9GUT
@inproceedings{ mao2024clusterdriven, title={Cluster-driven Personalized Federated Recommendation with Interest-aware Graph Convolution Network for Multimedia}, author={Xingyuan Mao and Yuwen Liu and Lianyong Qi and Li Duan and Xiaolong Xu and Xuyun Zhang and Wanchun Dou and Amin Beheshti and Xiaokang Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=daxLPN9GUT} }
Federated learning addresses privacy concerns in multimedia recommender systems by enabling collaborative model training without exchanging raw data. However, existing federated recommendation models are mainly based on basic backbones like Matrix Factorization (MF), which are inadequate to capture complex implicit interactions between users and multimedia content. Graph Convolutional Networks (GCNs) offer a promising method by utilizing the information from high-order neighbors, but face challenges in federated settings due to problems such as over-smoothing, data heterogeneity, and elevated communication expenses. To resolve these problems, we propose a Cluster-driven Personalized Federated Recommender System with Interest-aware Graph Convolution Network (CPF-GCN) for multimedia recommendation. CPF-GCN comprises a local interest-aware GCN module that optimizes node representations through subgraph-enhanced adaptive graph convolution operations, mitigating the over-smoothing problem by adaptively extracting information from layers and selectively utilizing high-order connectivity based on user interests. Simultaneously, a cluster-driven aggregation approach at the server significantly reduces communication costs by selectively aggregating models from clusters. The aggregation produces a global model and cluster-level models, combining them with the user's local model allows us to tailor the recommendation model for the user, achieving personalized recommendations. Moreover, we propose an adversarial optimization technique to further augment the robustness of CPF-GCN. Experiments on three datasets demonstrate that CPF-GCN significantly outperforms the state-of-the-art models.
Cluster-driven Personalized Federated Recommendation with Interest-aware Graph Convolution Network for Multimedia
[ "Xingyuan Mao", "Yuwen Liu", "Lianyong Qi", "Li Duan", "Xiaolong Xu", "Xuyun Zhang", "Wanchun Dou", "Amin Beheshti", "Xiaokang Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dZLqnv14mD
@inproceedings{ sun2024incremental, title={Incremental Learning via Robust Parameter Posterior Fusion}, author={Wenju Sun and Qingyong Li and Siyu Zhang and Wen Wang and Yangliao Geng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dZLqnv14mD} }
The posterior estimation of parameters based on Bayesian theory is a crucial technique in Incremental Learning (IL). The estimated posterior is typically utilized to impose loss regularization, which aligns the current training model parameters with the previously learned posterior to mitigate catastrophic forgetting, a major challenge in IL. However, this additional loss regularization can also be detrimental to model learning, preventing it from reaching the true global optimum. To overcome this limitation, this paper introduces a novel Bayesian IL framework, Robust Parameter Posterior Fusion (RP$^2$F). Unlike traditional methods, RP$^2$F directly estimates the parameter posterior for new data without introducing extra loss regularization, which allows the model to accommodate new knowledge more sufficiently. It then fuses this new posterior with the existing ones based on the Maximum A Posteriori (MAP) principle, ensuring effective knowledge sharing across tasks. Furthermore, RP$^2$F incorporates a common parameter-robustness prior to facilitate seamless integration during posterior fusion. Comprehensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets show that RP$^2$F not only effectively mitigates catastrophic forgetting but also achieves backward knowledge transfer.
Incremental Learning via Robust Parameter Posterior Fusion
[ "Wenju Sun", "Qingyong Li", "Siyu Zhang", "Wen Wang", "Yangliao Geng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dTjFknf5Q0
@inproceedings{ chen2024a, title={A Novel Confidence Guided Training Method for Conditional {GAN}s with Auxiliary Classifier}, author={Qi Chen and Wenjie Liu and Hu Ding}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dTjFknf5Q0} }
Conditional Generative Adversarial Network (cGAN) is an important type of GAN which is often equipped with an auxiliary classifier. However, existing cGANs usually have the issue of mode collapse which can incur unstable performance in practice. In this paper, we propose a novel stable training method for cGANs that well preserves generation fidelity and diversity. Our key ideas are designing efficient adversarial training strategies for the auxiliary classifier and mitigating the overconfidence issue caused by the cross-entropy loss. We propose a classifier-based cGAN called Confidence Guided Generative Adversarial Networks (CG-GAN) by introducing the adversarial training to a $K$-way classifier. In particular, we show in theory that the obtained $K$-way classifier can encourage the generator to learn the real joint distribution. To further enhance the performance and stability, we propose to establish a high-entropy prior label distribution for the generated data and incorporate a reverse KL divergence term into the minimax loss of CG-GAN. Through a comprehensive set of experiments on the popular benchmark datasets, including the large-scale dataset ImageNet, we demonstrate the advantages of our proposed method over several state-of-the-art cGANs.
A Novel Confidence Guided Training Method for Conditional GANs with Auxiliary Classifier
[ "Qi Chen", "Wenjie Liu", "Hu Ding" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dL9FjUGdXq
@inproceedings{ jin2024calibrating, title={Calibrating Prompt from History for Continual Vision-Language Retrieval and Grounding}, author={Tao Jin and Weicai Yan and Ye Wang and Sihang Cai and Shuaiqifan and Zhou Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dL9FjUGdXq} }
In the field of machine learning, continual learning is a crucial concept that allows models to adapt to non-stationary data distributions. However, most of the existing works focus on uni-modal settings and ignore multi-modal data. In this paper, to enable neural networks to better understand diverse modalities in real-world scenarios, we investigate continual learning for two typical vision-language applications, i.e. retrieval and grounding. Instead of conventional exemplar-based methods, we leverage the pre-trained transformer model (e.g. CLIP/GLIP) and the prompt technique to tackle this problem. Under this scheme, we identify two critical limitations in existing methods: (1) Unfamiliarity across tasks, which prevents task-specific prompts from achieving forward propagation; and (2) Heterogeneity between modalities, which makes it difficult to guarantee a consistent optimization direction for prompts of different modalities. To overcome these constraints, we design Historical Prompt Calibration that includes two objectives to calibrate prompts. First, the intra-modal relevance estimation helps encode sufficient task-specific information for prompts, with the help of a relevance estimator developed for recognizing task relevance. Second, the inter-modal consistency alignment enhances the agreement of the two modality-specific prompts in the current task by contrasting them with the prompts from previous tasks. We evaluate the superiority of our strategy over state-of-the-art methods on four vision-language applications, including two retrieval tasks (i.e. image- and video-text retrieval) and two grounding tasks (i.e. referring expression comprehension and segmentation).
Calibrating Prompt from History for Continual Vision-Language Retrieval and Grounding
[ "Tao Jin", "Weicai Yan", "Ye Wang", "Sihang Cai", "Shuaiqifan", "Zhou Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dJstz3rQ2f
@inproceedings{ lin2024triple, title={Triple Alignment Strategies for Zero-shot Phrase Grounding under Weak Supervision}, author={Pengyue Lin and Ruifan Li and Yuzhe Ji and Zhihan Yu and Fangxiang Feng and Zhanyu Ma and Xiaojie Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dJstz3rQ2f} }
Phrase Grounding (PG) aims to locate objects referred to by noun phrases. Recently, PG under weak supervision (i.e., grounding without region-level annotations) and zero-shot PG (i.e., grounding from seen categories to unseen ones) have been proposed, respectively. However, for real-world applications these two approaches are limited by the scarce annotations and the small number of categories available during training. In this paper, we propose a framework of zero-shot PG under weak supervision. Specifically, our PG framework is built on triple alignment strategies. Firstly, we propose a region-text alignment (RTA) strategy to build region-level attribute associations via CLIP. Secondly, we propose a domain alignment (DomA) strategy by minimizing the difference between distributions of seen classes in the training and those of the pre-training. Thirdly, we propose a category alignment (CatA) strategy by considering both category semantics and region-category relations. Extensive experimental results show that our proposed PG framework outperforms previous zero-shot methods and achieves competitive performance compared with existing weakly-supervised methods. The code and data will be publicly available at GitHub after the double-blind review phase.
Triple Alignment Strategies for Zero-shot Phrase Grounding under Weak Supervision
[ "Pengyue Lin", "Ruifan Li", "Yuzhe Ji", "Zhihan Yu", "Fangxiang Feng", "Zhanyu Ma", "Xiaojie Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dFWi3YlurU
@inproceedings{ feng2024relational, title={Relational Diffusion Distillation For Efficient Image Generation}, author={Weilun Feng and Chuanguang Yang and Zhulin An and Libo Huang and Boyu Diao and Fei Wang and Yongjun Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dFWi3YlurU} }
Although the diffusion model has achieved remarkable performance in the field of image generation, its high inference delay hinders its wide application in edge devices with scarce computing resources. Therefore, many training-free sampling methods have been proposed to reduce the number of sampling steps required for diffusion models. However, they perform poorly under a very small number of sampling steps. Thanks to the emergence of knowledge distillation technology, existing training-based methods have achieved excellent results with very few sampling steps. However, the current methods mainly focus on designing novel diffusion model sampling methods with knowledge distillation. How to better transfer diffusion knowledge from teacher models is a more valuable but rarely studied problem. Therefore, we propose Relational Diffusion Distillation (RDD), a novel distillation method tailored specifically for distilling diffusion models. Unlike existing methods that simply align teacher and student models at the pixel level or feature distributions, our method introduces cross-sample relationship interaction during the distillation process and alleviates the memory constraints induced by multiple sample interactions. Our RDD significantly enhances the effectiveness of the progressive distillation framework within the diffusion model. Extensive experiments on several datasets (e.g., CIFAR-10 and ImageNet) demonstrate that our proposed RDD yields a 1.47 FID decrease and a 256x speed-up compared to state-of-the-art diffusion distillation methods. Our code will be attached to the supplementary material.
Relational Diffusion Distillation For Efficient Image Generation
[ "Weilun Feng", "Chuanguang Yang", "Zhulin An", "Libo Huang", "Boyu Diao", "Fei Wang", "Yongjun Xu" ]
Conference
oral
2410.07679
[ "https://github.com/cantbebetter2/rdd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dAAy8no18G
@inproceedings{ lin2024consistent, title={Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors}, author={Yukang Lin and Haonan Han and Chaoqun Gong and Zunnan Xu and Yachao Zhang and Xiu Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dAAy8no18G} }
Reconstructing 3D objects from a single image guided by pretrained diffusion models has demonstrated promising outcomes. However, due to utilizing the case-agnostic rigid strategy, their generalization ability to arbitrary cases and the 3D consistency of reconstruction are still poor. In this work, we propose Consistent123, a case-aware two-stage method for highly consistent 3D asset reconstruction from one image with both 2D and 3D diffusion priors. In the first stage, Consistent123 utilizes only 3D structural priors for sufficient geometry exploitation, with a CLIP-based case-aware adaptive detection mechanism embedded within this process. In the second stage, 2D texture priors are introduced and progressively take on a dominant guiding role, delicately sculpting the details of the 3D model. Consistent123 aligns more closely with the evolving trends in guidance requirements, adaptively providing adequate 3D geometric initialization and suitable 2D texture refinement for different objects. Consistent123 can obtain highly 3D-consistent reconstruction and exhibits strong generalization ability across various objects. Qualitative and quantitative experiments show that our method significantly outperforms state-of-the-art image-to-3D methods.
Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors
[ "Yukang Lin", "Haonan Han", "Chaoqun Gong", "Zunnan Xu", "Yachao Zhang", "Xiu Li" ]
Conference
poster
2309.17261
[ "" ]
https://huggingface.co/papers/2309.17261
0
0
0
6
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=dA6yat7UNM
@inproceedings{ dong2024realistic, title={Realistic Full-Body Motion Generation from Sparse Tracking with State Space Model}, author={Kun Dong and Jian Xue and Zehai Niu and Xing Lan and Ke Lv and Qingyuan Liu and Xiaoyu Qin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=dA6yat7UNM} }
In the domain of generative multimedia and interactive experiences, generating realistic and accurate full-body poses from sparse tracking is crucial for many real-world applications, while achieving sequence modeling and efficient motion generation remains challenging. Recently, state space models (SSMs) with efficient hardware-aware designs (i.e., Mamba) have shown great potential for sequence modeling, particularly in temporal contexts. However, processing motion data is still challenging for SSMs. Specifically, the sparsity of input conditions makes motion generation an ill-posed problem. Moreover, the complex structure of the human body further complicates this task. To address these issues, we present Motion Mamba Diffusion (MMD), a novel conditional diffusion model, which effectively utilizes the sequence modeling capability of SSMs and the robust generation ability of diffusion models to track full-body poses accurately. In particular, we design a bidirectional Temporal Mamba Module (TMM) to model motion sequences. Additionally, a Spatial Mamba Module (SMM) is further proposed for feature enhancement within a single frame. Extensive experiments on the large motion capture dataset (AMASS) demonstrate that our proposed approach outperforms the latest methods in terms of accuracy and smoothness and achieves new state-of-the-art performance. Moreover, our approach runs in real-time, making it ideal for deployment in practical applications. The source code will be made public upon acceptance of this paper.
Realistic Full-Body Motion Generation from Sparse Tracking with State Space Model
[ "Kun Dong", "Jian Xue", "Zehai Niu", "Xing Lan", "Ke Lv", "Qingyuan Liu", "Xiaoyu Qin" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=d4A0Cw1gVS
@inproceedings{ yu2024exploring, title={Exploring Deeper! Segment Anything Model with Depth Perception for Camouflaged Object Detection}, author={Zhenni Yu and Xiaoqin Zhang and LiZhao and Yi Bin and Guobao Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=d4A0Cw1gVS} }
This paper introduces a new Segment Anything Model with Depth Perception (DSAM) for Camouflaged Object Detection (COD). DSAM exploits the zero-shot capability of SAM to realize precise segmentation in the RGB-D domain. It consists of the Prompt-Deeper Module and the Finer Module. The Prompt-Deeper Module utilizes knowledge distillation and the Bias Correction Module to achieve the interaction between RGB features and depth features, especially using depth features to correct erroneous parts in RGB features. Then, the interacted features are combined with the box prompt in SAM to create a prompt with depth perception. The Finer Module explores the possibility of accurately segmenting highly camouflaged targets from a depth perspective. It uncovers depth cues in areas missed by SAM through mask reversion, self-filtering, and self-attention operations, compensating for its defects in the COD domain. DSAM represents the first step towards the SAM-based RGB-D COD model. It maximizes the utilization of depth features while synergizing with RGB features to achieve multimodal complementarity, thereby overcoming the segmentation limitations of SAM and improving its accuracy in COD. Experimental results on COD benchmarks demonstrate that DSAM achieves excellent segmentation performance and reaches the state-of-the-art (SOTA) on COD benchmarks with less consumption of training resources.
Exploring Deeper! Segment Anything Model with Depth Perception for Camouflaged Object Detection
[ "Zhenni Yu", "Xiaoqin Zhang", "LiZhao", "Yi Bin", "Guobao Xiao" ]
Conference
poster
2407.12339
[ "https://github.com/guobaoxiao/dsam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cuWUx0RzgC
@inproceedings{ wang2024causaldriven, title={Causal-driven Large Language Models with Faithful Reasoning for Knowledge Question Answering}, author={Jiawei Wang and Da Cao and Shaofei Lu and Zhanchang Ma and Junbin Xiao and Tat-Seng Chua}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cuWUx0RzgC} }
In Large Language Models (LLMs), text generation that involves knowledge representation is often fraught with the risk of ''hallucinations'', where models confidently produce erroneous or fabricated content. These inaccuracies often stem from intrinsic biases in the pre-training stage or from the incorporation of human preference biases during the fine-tuning process. To mitigate these issues, we take inspiration from Goldman's causal theory of knowledge, which asserts that knowledge is not merely about having a true belief but also involves a causal connection between the belief and the truth of the proposition. We instantiate this theory within the context of Knowledge Question Answering (KQA) by constructing a causal graph that delineates the pathways between the candidate knowledge and belief. Through the application of the do-calculus rules from structural causal models, we devise an unbiased estimation framework based on this causal graph, thereby establishing a methodology for knowledge modeling grounded in causal inference. The resulting CORE framework (short for ``Causal knOwledge REasoning'') is comprised of four essential components: question answering, causal reasoning, belief scoring, and refinement. Together, they synergistically improve the KQA system by fostering faithful reasoning and introspection. Extensive experiments are conducted on ScienceQA and HotpotQA datasets, which demonstrate the effectiveness and rationality of the CORE framework.
Causal-driven Large Language Models with Faithful Reasoning for Knowledge Question Answering
[ "Jiawei Wang", "Da Cao", "Shaofei Lu", "Zhanchang Ma", "Junbin Xiao", "Tat-Seng Chua" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=crJyt9Cign
@inproceedings{ pei2024emotion, title={Emotion Recognition in {HMD}s: A Multi-task Approach Using Physiological Signals and Occluded Faces}, author={Yunqiang Pei and Jialei Tang and Qihang Tang and Mingfeng Zha and Dongyu Xie and Guoqing Wang and Zhitao Liu and Ning Xie and Peng Wang and Yang Yang and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=crJyt9Cign} }
Prior research on emotion recognition in extended reality (XR) has faced challenges due to the occlusion of facial expressions by Head-Mounted Displays (HMDs). This limitation hinders accurate Facial Expression Recognition (FER), which is crucial for immersive user experiences. This study aims to overcome the occlusion challenge by integrating physiological signals with partially visible facial expressions to enhance emotion recognition in XR environments. We employed a multi-task approach, utilizing feature-level fusion to combine Electroencephalography (EEG) and Galvanic Skin Response (GSR) signals with occluded facial expressions. The model predicts valence and arousal simultaneously from both macro- and micro-expressions. Our method demonstrated improved accuracy in emotion recognition under partial occlusion conditions. The integration of temporal physiological signals with other modalities significantly enhanced performance, particularly for half-face emotion recognition. The study presents a novel approach to emotion recognition in XR, addressing the limitations of facial occlusion by HMDs. The findings suggest that physiological signals are vital for interpreting emotions in occluded scenarios, offering potential for real-time applications and advancing social XR applications.
Emotion Recognition in HMDs: A Multi-task Approach Using Physiological Signals and Occluded Faces
[ "Yunqiang Pei", "Jialei Tang", "Qihang Tang", "Mingfeng Zha", "Dongyu Xie", "Guoqing Wang", "Zhitao Liu", "Ning Xie", "Peng Wang", "Yang Yang", "Heng Tao Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cpzRtSVq5F
@inproceedings{ yi2024multimodal, title={Multimodal Fusion via Hypergraph Autoencoder and Contrastive Learning for Emotion Recognition in Conversation}, author={Zijian Yi and Ziming Zhao and Zhishu Shen and Tiehua Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cpzRtSVq5F} }
Multimodal emotion recognition in conversation (MERC) seeks to identify the speakers' emotions expressed in each utterance, offering significant potential across diverse fields. The challenge of MERC lies in balancing speaker modeling and context modeling, encompassing both long-distance and short-distance contexts, as well as addressing the complexity of multimodal information fusion. Recent research adopts graph-based methods to model intricate conversational relationships effectively. Nevertheless, the majority of these methods utilize a fixed fully connected structure to link all utterances, relying on convolution to interpret complex context. This approach can inherently heighten the redundancy in contextual messages and excessive graph network smoothing, particularly in the context of long-distance conversations. To address this issue, we propose a framework that dynamically adjusts hypergraph connections by variational hypergraph autoencoder (VHGAE), and employs contrastive learning to mitigate uncertainty factors during the reconstruction process. Experimental results demonstrate the effectiveness of our proposal against the state-of-the-art methods on IEMOCAP and MELD datasets. We release the code to support the reproducibility of this work (currently it is uploaded as the "complementary material" within the review system and will be made public following the completion of the review process).
Multimodal Fusion via Hypergraph Autoencoder and Contrastive Learning for Emotion Recognition in Conversation
[ "Zijian Yi", "Ziming Zhao", "Zhishu Shen", "Tiehua Zhang" ]
Conference
poster
2408.00970
[ "https://github.com/yzjred/-haucl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cpuUuie5yH
@inproceedings{ sun2024distribution, title={Distribution Consistency Guided Hashing for Cross-Modal Retrieval}, author={Yuan Sun and Kaiming Liu and Yongxiang Li and Zhenwen Ren and Jian Dai and Dezhong Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cpuUuie5yH} }
With the massive emergence of multi-modal data, cross-modal retrieval (CMR) has become one of the hot topics. Thanks to fast retrieval and efficient storage, cross-modal hashing (CMH) provides a feasible solution for large-scale multi-modal data. Previous CMH methods always directly learn common hash codes to fuse different modalities. Although they have obtained some success, there are still some limitations: 1) These approaches often prioritize reducing the heterogeneity in multi-modal data by learning consensus hash codes, yet they may sacrifice specific information unique to each modality. 2) They frequently utilize pairwise similarities to guide hashing learning, neglecting class distribution correlations, which do not notably contribute to reducing differences among various modalities. To overcome these two issues, we propose a novel Distribution Consistency Guided Hashing (DCGH) framework. Specifically, we first learn the modality-specific representation to extract the private discriminative information. Further, we learn consensus hash codes from the private representation by consensus hashing learning, thereby merging the specifics with consistency. Finally, we propose distribution consistency learning to guide hash codes following a similar class distribution principle between multi-modal data, thereby exploring more consistent information. Extensive experimental results on four benchmark datasets demonstrate the effectiveness of our DCGH on both fully paired and partially paired CMR tasks.
Distribution Consistency Guided Hashing for Cross-Modal Retrieval
[ "Yuan Sun", "Kaiming Liu", "Yongxiang Li", "Zhenwen Ren", "Jian Dai", "Dezhong Peng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cog8wDX6bu
@inproceedings{ zhang2024rateaware, title={Rate-aware Compression for Ne{RF}-based Volumetric Video}, author={Zhiyu Zhang and Guo Lu and Huanxiong Liang and Zhengxue Cheng and Anni Tang and Li Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cog8wDX6bu} }
The neural radiance fields (NeRF) have advanced the development of 3D volumetric video technology, but the large data volumes they involve pose significant challenges for storage and transmission. To address these problems, the existing solutions typically compress these NeRF representations after the training stage, leading to a separation between representation training and compression. In this paper, we try to directly learn a compact NeRF representation for volumetric video in the training stage based on the proposed rate-aware compression framework. Specifically, for volumetric video, we use a simple yet effective modeling strategy to reduce temporal redundancy for the NeRF representation. Then, during the training phase, an implicit entropy model is utilized to estimate the bitrate of the NeRF representation. This entropy model is then encoded into the bitstream to assist in the decoding of the NeRF representation. This approach enables precise bitrate estimation, thereby leading to a compact NeRF representation. Furthermore, we propose an adaptive quantization strategy and learn the optimal quantization step for the NeRF representations. Finally, the NeRF representation can be optimized by using the rate-distortion trade-off. Our proposed compression framework can be used for different representations and experimental results demonstrate that our approach significantly reduces the storage size with marginal distortion and achieves state-of-the-art rate-distortion performance for volumetric video on the HumanRF and ReRF datasets. Compared to the previous state-of-the-art method TeTriRF, we achieved an approximately -80\% BD-rate on the HumanRF dataset and -60\% BD-rate on the ReRF dataset.
Rate-aware Compression for NeRF-based Volumetric Video
[ "Zhiyu Zhang", "Guo Lu", "Huanxiong Liang", "Zhengxue Cheng", "Anni Tang", "Li Song" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cmfb8BWCx9
@inproceedings{ shen2024epluflsid, title={{EPL}-{UFLSID}: Efficient Pseudo Labels-Driven Underwater Forward-Looking Sonar Images Object Detection}, author={Cheng Shen and Liquan Shen and Mengyao Li and Meng Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cmfb8BWCx9} }
Sonar imaging is widely utilized in submarine and underwater detection missions. However, due to the complex underwater environment, sonar images suffer from complex distortions and noises, making detection models hard to extract clean high-level features for detection. Existing works introduce denoised images as pseudo labels to assist the network to extract clean features while not fully considering the rationality of pseudo labels. To this end, we propose an Efficient Pseudo Labels-Driven Underwater Forward-looking Sonar Images Object Detection algorithm (EPL-UFLSID). Specifically, we first design a Gaussian Mixture Model based Deep Image Prior (GMMDIP) network to generate denoised sonar images by setting the GMM distribution as its input. After that, to filter the most detection-friendly images of the denoised images generated by GMMDIP as efficient pseudo labels, Detection-Friendly Image Quality Assessment network (DFIQA), is designed, which is also able to help EPL-UFLSID further distill cleaner features from pseudo labels to improve detection performance. Extensive experimental results show that our EPL-UFLSID reaches average precision (AP) of 67.8\%/39.8\% and average recall (AR) of 73.7\%/49.6\% on two real sonar datasets, which outperforms SOTA underwater forward-looking sonar images object detection algorithms.
EPL-UFLSID: Efficient Pseudo Labels-Driven Underwater Forward-Looking Sonar Images Object Detection
[ "Cheng Shen", "Liquan Shen", "Mengyao Li", "Meng Yu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=clKjOYw22x
@inproceedings{ gou2024interpretable, title={Interpretable Matching of Optical-{SAR} Image via Dynamically Conditioned Diffusion Models}, author={Shuiping Gou and Xin Wang and Xinlin Wang and Yunzhi Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=clKjOYw22x} }
Driven by the complementary information fusion of optical and synthetic aperture radar (SAR) images, optical-SAR image matching has drawn much attention. However, the significant radiometric differences between them impose great challenges on accurate matching. Most existing approaches convert SAR and optical images into a shared feature space to perform the matching, but these methods often fail to achieve robust matching since the feature spaces are unknown and uninterpretable. Motivated by the interpretable latent space of diffusion models, this paper formulates an optical-SAR image translation and matching framework via a dynamically conditioned diffusion model (DCDM) to achieve interpretable and robust optical-SAR cross-modal image matching. Specifically, in the denoising process, to filter out outlier matching regions, a gated dynamic sparse cross-attention module is proposed to facilitate efficient and effective long-range interactions of multi-grained features between the cross-modal data. In addition, a spatial position consistency constraint is designed to encourage the cross-attention features to perceive the spatial correspondence across modalities, improving the matching precision. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods in terms of both the matching accuracy and the interpretability.
Interpretable Matching of Optical-SAR Image via Dynamically Conditioned Diffusion Models
[ "Shuiping Gou", "Xin Wang", "Xinlin Wang", "Yunzhi Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ckYzh7hdp3
@inproceedings{ pan2024harmonicnerf, title={HarmonicNe{RF}: Geometry-Informed Synthetic View Augmentation for 3D Scene Reconstruction in Driving Scenarios}, author={Xiaochao Pan and Jiawei Yao and Hongrui Kou and Tong Wu and Canran Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ckYzh7hdp3} }
In the realm of autonomous driving, achieving precise 3D reconstruction of the driving environment is critical for ensuring safety and effective navigation. Neural Radiance Fields (NeRF) have shown promise in creating highly detailed and accurate models of complex environments. However, the application of NeRF in autonomous driving scenarios encounters several challenges, primarily due to the sparsity of viewpoints inherent in camera trajectories and the constraints on data collection in unbounded outdoor scenes, which typically occur along predetermined paths. This limitation not only reduces the available scene information but also poses significant challenges for NeRF training, as the sparse and path-distributed observational data leads to under-representation of the scene's geometry. In this paper, we introduce HarmonicNeRF, a novel approach for outdoor self-supervised monocular scene reconstruction. HarmonicNeRF capitalizes on the strengths of NeRF and enhances surface reconstruction accuracy by augmenting the input space with geometry-informed synthetic views. This is achieved through the application of spherical harmonics to generate novel radiance values, taking into careful consideration the color observations from the limited available real-world views. Additionally, our method incorporates proxy geometry to effectively manage occlusion, generating radiance pseudo-labels that circumvent the limitations of traditional image-warping techniques, which often fail in sparse data conditions typical of autonomous driving environments. Extensive experiments conducted on the KITTI, Argoverse, and NuScenes datasets demonstrate our approach establishes new benchmarks in synthesizing novel depth views and reconstructing scenes, significantly outperforming existing methods. Project page: https://github.com/Jiawei-Yao0812/HarmonicNeRF
HarmonicNeRF: Geometry-Informed Synthetic View Augmentation for 3D Scene Reconstruction in Driving Scenarios
[ "Xiaochao Pan", "Jiawei Yao", "Hongrui Kou", "Tong Wu", "Canran Xiao" ]
Conference
poster
2310.05483
[ "https://github.com/jiawei-yao0812/harmonicnerf" ]
https://huggingface.co/papers/2310.05483
0
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=cQLG7HQ5hH
@inproceedings{ yanagi2024dqg, title={{DQG}: Database Question Generation for Exact Text-based Image Retrieval}, author={Rintaro Yanagi and Ren Togo and Takahiro Ogawa and Miki Haseyama}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cQLG7HQ5hH} }
Screening similar but non-target images in text-based image retrieval is crucial for pinpointing the user's desired images accurately. However, conventional methods mainly focus on enhancing text-image matching performance, often failing to identify images that exactly match the retrieval intention because of the query quality. User-provided queries frequently lack adequate information for screening similar but not target images, especially when the target database (DB) contains numerous similar images. Therefore, a novel approach is needed to extract valuable information from users for effective screening. In this paper, we propose a DB question generation (DQG) model to enhance exact cross-modal image retrieval performance. Our DQG model learns to generate effective questions that precisely screen similar but non-target images using DB contents information. By answering the questions generated from our model, users can reach their desired images by only answering the presented questions even within DBs with similar content. Experimental results on publicly available datasets show that our proposed approach can significantly improve exact cross-modal image retrieval performance. Code is available in the supplemental materials and will be publicly available.
DQG: Database Question Generation for Exact Text-based Image Retrieval
[ "Rintaro Yanagi", "Ren Togo", "Takahiro Ogawa", "Miki Haseyama" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=cLmJmXZZmw
@inproceedings{ zhang2024improving, title={Improving the Training of the {GAN}s with Limited Data via Dual Adaptive Noise Injection}, author={Zhaoyu Zhang and Yang Hua and Guanxiong Sun and Hui Wang and Se{\'a}n F. McLoone}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=cLmJmXZZmw} }
Recently, many studies have highlighted that training Generative Adversarial Networks (GANs) with limited data suffers from the overfitting of the discriminator ($D$). Existing studies mitigate the overfitting of $D$ by employing data augmentation, model regularization, or pre-trained models. Despite the success of existing methods in training GANs with limited data, noise injection is another plausible, complementary, yet not well-explored approach to alleviate the overfitting of $D$ issue. In this paper, we propose a simple yet effective method called Dual Adaptive Noise Injection (DANI), to further improve the training of GANs with limited data. Specifically, DANI consists of two adaptive strategies: adaptive injection probability and adaptive noise strength. For the adaptive injection probability, Gaussian noise is injected into both real and fake images for generator ($G$) and $D$ with a probability $p$, respectively, where the probability $p$ is controlled by the overfitting degree of $D$. For the adaptive noise strength, the Gaussian noise is produced by applying the adaptive forward diffusion process to both real and fake images, respectively. As a result, DANI can effectively increase the overlap between the distributions of real and fake data during training, thus alleviating the overfitting of $D$ issue. Extensive experiments on several commonly-used datasets with both StyleGAN2 and FastGAN backbones demonstrate that DANI can further improve the training of GANs with limited data and achieve state-of-the-art results compared with other methods.
Improving the Training of the GANs with Limited Data via Dual Adaptive Noise Injection
[ "Zhaoyu Zhang", "Yang Hua", "Guanxiong Sun", "Hui Wang", "Seán F. McLoone" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=c9I9NAbkpz
@inproceedings{ dai2024trga, title={TrGa: Reconsidering the Application of Graph Neural Networks in Two-View Correspondence Pruning}, author={Luanyuan Dai and Xiaoyu Du and Jinhui Tang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=c9I9NAbkpz} }
Two-view correspondence pruning aims to accurately remove incorrect correspondences (outliers) from initial ones. Graph Neural Networks (GNNs) combined with Multilayer Perceptrons (MLPs) are regarded as a powerful means of handling sparse and unevenly distributed data. However, the expression capability of correspondence features obtained by MLPs is limited by their inherent lack of context information. In addition, previous works directly utilize the outputs of off-the-shelf GNNs, thus leading to confusion between sparse correspondence attribute features and their global structural information. To alleviate these issues, we propose a two-view correspondence pruning network TrGa. Specifically, we firstly use complete Transformer structures instead of context-agnostic MLPs to capture correspondence features with global context information and stronger expression capability. After that, we introduce the Concatenation Graph Node and Global Structure (CGNS) block to separately capture the interaction patterns among sparse correspondence attribute features and the global structural information among them, which can prevent their confusion. Finally, the proposed Feature Dimension Transformation and Enhancement (FDTE) block is applied for dimension transformation and feature augmentation. Additionally, we propose an efficient variant C-TrGa, in which the similarity matrix of the proposed C-Transformer is computed along the channel dimension. Extensive experiments demonstrate that the proposed TrGa and C-TrGa outperform state-of-the-art methods in different computer vision tasks. The code is provided in the supplementary materials.
TrGa: Reconsidering the Application of Graph Neural Networks in Two-View Correspondence Pruning
[ "Luanyuan Dai", "Xiaoyu Du", "Jinhui Tang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=c4XZpv5Fdh
@inproceedings{ li2024boosting, title={Boosting Audio Visual Question Answering via Key Semantic-Aware Cues}, author={Guangyao Li and HenghuiDu and Di Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=c4XZpv5Fdh} }
The Audio Visual Question Answering (AVQA) task aims to answer questions related to various visual objects, sounds, and their interactions in videos. Such naturally multimodal videos contain rich and complex dynamic audio-visual components, with only a portion of them closely related to the given questions. Hence, effectively perceiving audio-visual cues relevant to the given questions is crucial for correctly answering them. In this paper, we propose a Temporal-Spatial Perception Model (TSPM), which aims to empower the model to perceive key visual and auditory cues related to the questions. Specifically, considering the challenge of aligning non-declarative questions and visual representations into the same semantic space using visual-language pretrained models, we construct declarative sentence prompts derived from the question template to assist the temporal perception module in better identifying critical segments relevant to the questions. Subsequently, a spatial perception module is designed to merge visual tokens from selected segments to highlight key latent targets, followed by cross-modal interaction with audio to perceive potential sound-aware areas. Finally, the significant temporal-spatial cues from these modules are integrated for answering the question. Extensive experiments on multiple AVQA benchmarks demonstrate that our framework excels not only in understanding audio-visual scenes but also in answering complex questions effectively.
Boosting Audio Visual Question Answering via Key Semantic-Aware Cues
[ "Guangyao Li", "HenghuiDu", "Di Hu" ]
Conference
poster
2407.20693
[ "https://github.com/gewu-lab/tspm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=c045hdQdFX
@inproceedings{ luo2024panosent, title={PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis}, author={Meng Luo and Hao Fei and Bobo Li and Shengqiong Wu and Qian Liu and Soujanya Poria and Erik Cambria and Mong-Li Lee and Wynne Hsu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=c045hdQdFX} }
While existing Aspect-based Sentiment Analysis (ABSA) has received extensive effort and advancement, there are still gaps in defining a more holistic research target seamlessly integrating multimodality, conversation context, fine-granularity, and also covering the changing sentiment dynamics as well as cognitive causal rationales. This paper bridges the gaps by introducing a multimodal conversational ABSA, where two novel subtasks are proposed: 1) Panoptic Sentiment Sextuple Extraction, panoramically recognizing holder, target, aspect, opinion, sentiment, rationale from multi-turn multi-party multimodal dialogue. 2) Sentiment Flipping Analysis, detecting the dynamic sentiment transformation throughout the conversation with the causal reasons. To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale (10,000 dialogues), multimodality (text, image, audio and video), multilingualism (English, Chinese and Spanish), multi-scenarios (over 100 domains), and covering both implicit&explicit sentiment elements. Further, to effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism. Extensive evaluations demonstrate the superiority of our methods over strong baselines, validating the efficacy of all our proposed methods. The work is expected to open up a new era for the ABSA community, and thus all our codes and data are open at https://PanoSent.github.io/.
PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis
[ "Meng Luo", "Hao Fei", "Bobo Li", "Shengqiong Wu", "Qian Liu", "Soujanya Poria", "Erik Cambria", "Mong-Li Lee", "Wynne Hsu" ]
Conference
oral
2408.09481
[ "" ]
https://huggingface.co/papers/2408.09481
1
1
0
9
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=bzSbhuoRfr
@inproceedings{ ding2024masked, title={Masked Snake Attention for Fundus Image Restoration with Vessel Preservation}, author={Xiaohuan Ding and Gong Yangrui and Tianyi Shi and Zihang Huang and Gangwei Xu and Xin Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bzSbhuoRfr} }
Restoring low-quality fundus images, especially the recovery of vessel structures, is crucial for clinical observation and diagnosis. Existing state-of-the-art methods use standard convolution and window based self-attention block to recover low-quality fundus images, but these feature capturing approaches do not effectively match the slender and tortuous structure of retinal vessels. Therefore, these methods struggle to accurately restore vessel structures. To overcome this challenge, we propose a novel low-quality fundus image restoration method called Masked Snake Attention Network (MSANet). It is designed specifically for accurately restoring vessel structures. Specifically, we introduce the Snake Attention module (SA) to adaptively aggregate vessel features based on the morphological structure of the vessels. Due to the small proportion of vessel pixels in the image, we further present the Masked Snake Attention module (MSA) to more efficiently capture vessel features. MSA enhances vessel features by constraining snake attention within regions predicted by segmentation methods. Extensive experimental results demonstrate that our MSANet outperforms the state-of-the-art methods in enhancement evaluation and downstream segmentation tasks.
Masked Snake Attention for Fundus Image Restoration with Vessel Preservation
[ "Xiaohuan Ding", "Gong Yangrui", "Tianyi Shi", "Zihang Huang", "Gangwei Xu", "Xin Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bw3xRlHRtC
@inproceedings{ chen2024find, title={{FIND}: Fine-tuning Initial Noise Distribution with Policy Optimization for Diffusion Models}, author={Changgu Chen and Libing Yang and Xiaoyan Yang and Lianggangxu Chen and Gaoqi He and Changbo Wang and Yang Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bw3xRlHRtC} }
In recent years, large-scale pre-trained diffusion models have demonstrated their outstanding capabilities in image and video generation tasks. However, existing models tend to produce visual objects commonly found in the training dataset, which diverges from user input prompts. The underlying reason behind the inaccurate generated results lies in the model's difficulty in sampling from specific intervals of the initial noise distribution corresponding to the prompt. Moreover, it is challenging to directly optimize the initial distribution, given that the diffusion process involves multiple denoising steps. In this paper, we introduce a Fine-tuning Initial Noise Distribution (FIND) framework with policy optimization, which unleashes the powerful potential of pre-trained diffusion networks by directly optimizing the initial distribution to align the generated contents with user-input prompts. To this end, we first reformulate the diffusion denoising procedure as a one-step Markov decision process and employ policy optimization to directly optimize the initial distribution. In addition, a dynamic reward calibration module is proposed to ensure training stability during optimization. Furthermore, we introduce a ratio clipping algorithm to utilize historical data for network training and prevent the optimized distribution from deviating too far from the original policy, restraining excessive optimization magnitudes. Extensive experiments demonstrate the effectiveness of our method in both text-to-image and text-to-video tasks, surpassing SOTA methods in achieving consistency between prompts and the generated content. Our method is 10 times faster than the SOTA approach.
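The one-step policy-optimization view with ratio clipping can be illustrated with a short PPO-style sketch over a learnable Gaussian for the initial noise. The reward function, the normalization used as "reward calibration", and every constant below are placeholders rather than the paper's reward model or training recipe.

```python
# Minimal PPO-style sketch: optimize a Gaussian initial-noise distribution with
# ratio clipping (assumption: toy reward and constants; not the FIND implementation).
import torch

mean = torch.zeros(4, requires_grad=True)          # learnable initial-noise mean
log_std = torch.zeros(4, requires_grad=True)       # learnable initial-noise log-std
opt = torch.optim.Adam([mean, log_std], lr=1e-2)

def reward_fn(z: torch.Tensor) -> torch.Tensor:
    # Placeholder reward: in practice this would score prompt/content alignment.
    return -(z - 1.0).pow(2).sum(dim=-1)

for _ in range(50):
    dist_old = torch.distributions.Normal(mean.detach(), log_std.detach().exp())
    z = dist_old.sample((64,))                      # one-step "actions"
    logp_old = dist_old.log_prob(z).sum(-1)
    adv = reward_fn(z)
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)   # simple reward normalization

    dist_new = torch.distributions.Normal(mean, log_std.exp())
    ratio = (dist_new.log_prob(z).sum(-1) - logp_old).exp()
    clipped = torch.clamp(ratio, 0.8, 1.2)          # ratio clipping
    loss = -torch.min(ratio * adv, clipped * adv).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```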
FIND: Fine-tuning Initial Noise Distribution with Policy Optimization for Diffusion Models
[ "Changgu Chen", "Libing Yang", "Xiaoyan Yang", "Lianggangxu Chen", "Gaoqi He", "Changbo Wang", "Yang Li" ]
Conference
poster
2407.19453
[ "https://github.com/vpx-ecnu/find-website" ]
https://huggingface.co/papers/2407.19453
0
0
0
7
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=bpasC0JvYP
@inproceedings{ zhang2024mixed, title={Mixed Prototype Correction for Causal Inference in Medical Image Classification}, author={Yajie Zhang and Zhi-An Huang and Zhiliang Hong and Songsong Wu and Jibin Wu and KC Tan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bpasC0JvYP} }
The heterogeneity of medical images poses significant challenges to accurate disease diagnosis. To tackle this issue, the impact of such heterogeneity on the causal relationship between image features and diagnostic labels should be incorporated into model design, which however remains underexplored. In this paper, we propose a mixed prototype correction for causal inference (MPCCI) method, aimed at mitigating the impact of unseen confounding factors on the causal relationships between medical images and disease labels, so as to enhance the diagnostic accuracy of deep learning models. The MPCCI comprises a causal inference component based on front-door adjustment and an adaptive training strategy. The causal inference component employs a multi-view feature extraction (MVFE) module to establish mediators, and a mixed prototype correction (MPC) module to execute causal interventions. Moreover, the adaptive training strategy incorporates both information purity and maturity metrics to maintain stable model training. Experimental evaluations on four medical image datasets, encompassing CT and ultrasound modalities, demonstrate the superior diagnostic accuracy and reliability of the proposed MPCCI. The code will be available at https://github.com/Yajie-Zhang/MPCCI.
Mixed Prototype Correction for Causal Inference in Medical Image Classification
[ "Yajie Zhang", "Zhi-An Huang", "Zhiliang Hong", "Songsong Wu", "Jibin Wu", "KC Tan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bgRADWSr6O
@inproceedings{ yan2024d, title={4D Gaussian Splatting with Scale-aware Residual Field and Adaptive Optimization for Real-time rendering of temporally complex dynamic scenes}, author={Jinbo Yan and Rui Peng and Luyang Tang and Ronggang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bgRADWSr6O} }
Reconstructing dynamic scenes from video sequences is a highly promising task in the multimedia domain. While previous methods have made progress, they often struggle with slow rendering and managing temporal complexities such as significant motion and object appearance/disappearance. In this paper, we propose SaRO-GS as a novel dynamic scene representation capable of achieving real-time rendering while effectively handling temporal complexities in dynamic scenes. To address the issue of slow rendering speed, we adopt a Gaussian primitive-based representation and optimize the Gaussians in 4D space, which facilitates real-time rendering with the assistance of 3D Gaussian Splatting. Additionally, to handle temporally complex dynamic scenes, we introduce a Scale-aware Residual Field. This field considers the size information of each Gaussian primitive while encoding its residual feature and aligns with the self-splitting behavior of Gaussian primitives. Furthermore, we propose an Adaptive Optimization Schedule, which assigns different optimization strategies to Gaussian primitives based on their distinct temporal properties, thereby expediting the reconstruction of dynamic regions. Through evaluations on monocular and multi-view datasets, our method has demonstrated state-of-the-art performance.
4D Gaussian Splatting with Scale-aware Residual Field and Adaptive Optimization for Real-time rendering of temporally complex dynamic scenes
[ "Jinbo Yan", "Rui Peng", "Luyang Tang", "Ronggang Wang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bgKpTAN8qd
@inproceedings{ qin2024hssurf, title={{HS}-Surf: A Novel High-Frequency Surface Shell Radiance Field to Improve Large-Scale Scene Rendering}, author={Jiongming Qin and Fei LUO and Tuo Cao and Wenju Xu and Chunxia Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bgKpTAN8qd} }
Prior neural radiance fields often struggle to preserve high-frequency textures in urban and aerial large-scale scenes due to insufficient model capacity on the scene surface. This is attributed to their sampling locations or grid vertices falling in empty areas. Additionally, most models do not consider the drastic changes in distances. To address these issues, we propose a novel high-frequency surface shell radiance field, which uses depth-guided information to create a shell enveloping the scene surface under the current view, and then samples conic frustums on this shell to render high-frequency textures. Specifically, our method comprises three parts. Initially, we propose a strategy to fuse voxel grids with distance-scale information to generate a coarse scene at different distance scales. Subsequently, we construct a shell based on the depth information to compensate for texture details not captured by the voxels. Finally, smoothing and denoising post-processing further improves the rendering quality. Extensive scene experiments and ablation studies demonstrate that our method achieves a clear improvement in high-frequency textures at different distance scales and outperforms the state-of-the-art methods.
HS-Surf: A Novel High-Frequency Surface Shell Radiance Field to Improve Large-Scale Scene Rendering
[ "Jiongming Qin", "Fei LUO", "Tuo Cao", "Wenju Xu", "Chunxia Xiao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bb0M2WYeng
@inproceedings{ lu2024fuse, title={Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models}, author={Tianyi Lu and Xing Zhang and Jiaxi Gu and Hang Xu and Renjing Pei and Songcen Xu and Xingjun Ma and Zuxuan Wu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bb0M2WYeng} }
Latent Diffusion Models (LDMs) are renowned for their powerful capabilities in image and video synthesis. Yet, compared to text-to-image (T2I) editing, text-to-video (T2V) editing suffers from a lack of decent temporal consistency and structure, due to insufficient pre-training data, limited model editability, or extensive tuning costs. To address this gap, we propose FLDM (Fused Latent Diffusion Model), a training-free framework that achieves high-quality T2V editing by integrating various T2I and T2V LDMs. Specifically, FLDM utilizes a hyper-parameter with an update schedule to effectively fuse image and video latents during the denoising process. This paper is the first to reveal that T2I and T2V LDMs can complement each other in terms of structure and temporal consistency, ultimately generating high-quality videos. It is worth noting that FLDM can serve as a versatile plugin, applicable to off-the-shelf image and video LDMs, to significantly enhance the quality of video editing. Extensive quantitative and qualitative experiments on popular T2I and T2V LDMs demonstrate FLDM's superior editing quality compared to state-of-the-art T2V editing methods.
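The latent-fusion idea can be pictured with a tiny loop that blends per-step outputs of an image denoiser and a video denoiser under a decaying weight. The two denoise functions and the linear schedule below are stand-ins, not the actual T2I/T2V models or FLDM's update schedule.

```python
# Minimal sketch of fusing image and video latents during denoising
# (assumption: placeholder denoisers and an illustrative linear fusion schedule).
import torch

def t2i_denoise_step(z, t):   # placeholder for one image-LDM denoising step
    return z - 0.01 * z

def t2v_denoise_step(z, t):   # placeholder for one video-LDM denoising step
    return z - 0.01 * z

z = torch.randn(16, 4, 8, 8)             # shared latent (frames, C, H, W)
num_steps = 50
for t in reversed(range(num_steps)):
    lam = t / num_steps                   # fusion weight, decays as denoising proceeds
    z_vid = t2v_denoise_step(z, t)        # temporal consistency from the video model
    z_img = t2i_denoise_step(z, t)        # structure/appearance from the image model
    z = lam * z_vid + (1.0 - lam) * z_img
```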
Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models
[ "Tianyi Lu", "Xing Zhang", "Jiaxi Gu", "Hang Xu", "Renjing Pei", "Songcen Xu", "Xingjun Ma", "Zuxuan Wu" ]
Conference
poster
2310.16400
[ "https://github.com/lutianyi0603/fuse_your_latents" ]
https://huggingface.co/papers/2310.16400
0
0
0
7
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=bXhz5c12Ee
@inproceedings{ zhang2024trainingfree, title={Training-Free Feature Reconstruction with Sparse Optimization for Vision-Language Models}, author={Yi Zhang and Ke Yu and Angelica I Aviles-Rivero and Jiyuan Jia and Yushun Tang and Zhihai He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bXhz5c12Ee} }
In this paper, we address the challenge of adapting vision-language models (VLMs) to few-shot image recognition in a training-free manner. We observe that existing methods are not able to effectively characterize the semantic relationship between support and query samples in a training-free setting. We recognize that, in the semantic feature space, the feature of the query image is a linear and sparse combination of support image features, since support-query pairs are from the same class and share the same small set of distinctive visual attributes. Motivated by this interesting observation, we propose a novel method called Training-free Feature ReConstruction with Sparse optimization (TaCo), which formulates the few-shot image recognition task as a feature reconstruction and sparse optimization problem. Specifically, we exploit the VLM to encode the query and support images into features. We utilize sparse optimization to reconstruct the query feature from the corresponding support features. The feature reconstruction error is then used to define the reconstruction similarity. Coupled with the text-image similarity provided by the VLM, our reconstruction similarity analysis accurately characterizes the relationship between support and query images. This results in significantly improved performance in few-shot image recognition. Our extensive experimental results on few-shot recognition demonstrate that the proposed method outperforms existing state-of-the-art approaches by substantial margins.
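To make the reconstruction-similarity idea concrete, the sketch below reconstructs an L2-normalized query feature from each class's support features with a plain Lasso solver and classifies by the smallest reconstruction error. The solver choice, the alpha value, and the random toy features are assumptions, not the paper's optimizer or features.

```python
# Minimal sketch of reconstruction similarity via sparse optimization
# (assumption: off-the-shelf Lasso stands in for the paper's sparse solver).
import numpy as np
from sklearn.linear_model import Lasso

def recon_error(query: np.ndarray, support: np.ndarray, alpha: float = 0.05) -> float:
    """Reconstruct one query feature from a class's support features (rows)."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(support.T, query)                 # query ~ support.T @ w, with sparse w
    return float(np.linalg.norm(support.T @ lasso.coef_ - query))

rng = np.random.default_rng(0)
query = rng.normal(size=512)
query /= np.linalg.norm(query)
classes = {c: rng.normal(size=(5, 512)) for c in range(3)}   # toy 3-way 5-shot supports
for c in classes:
    classes[c] /= np.linalg.norm(classes[c], axis=1, keepdims=True)

errors = {c: recon_error(query, s) for c, s in classes.items()}
pred = min(errors, key=errors.get)              # smaller error -> higher similarity
print(pred, errors)
```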
Training-Free Feature Reconstruction with Sparse Optimization for Vision-Language Models
[ "Yi Zhang", "Ke Yu", "Angelica I Aviles-Rivero", "Jiyuan Jia", "Yushun Tang", "Zhihai He" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bTxK85v2uO
@inproceedings{ wang2024a, title={A Simple and Provable Approach for Learning on Noisy Labeled Multi-modal Medical Images}, author={Nan Wang and Zonglin Di and Houlin He and Qingchao Jiang and Xiaoxiao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bTxK85v2uO} }
Deep learning for medical image classification needs large amounts of carefully labeled data with the aid of domain experts. However, data labeling is vulnerable to noise, which may degrade the accuracy of classifiers. Given the cost of medical data collection and annotation, methods that can effectively utilize noisy labeled data are highly desirable. In addition, efficiency and universality are essential for noisy-label training, which requires further research. To address the lack of high-quality labeled medical data and meet algorithm efficiency requirements for clinical application, we propose a simple yet effective approach for multi-field medical images to utilize noisy data, named Pseudo-T correction. Specifically, we design a noisy label filter to divide the training data into clean and noisy samples. Then, we estimate a transition matrix that corrects model predictions based on the partitions of clean and noisy data samples. However, if the model overfits noisy data, noisy samples become more difficult to detect in the filtering step, resulting in inaccurate transition matrix estimation. Therefore, we employ gradient disparity as an effective criterion to decide whether or not to refine the transition matrix in the model’s further training steps. The novel design enables us to build more accurate machine-learning models by leveraging noisy labels. We demonstrate that our method outperforms the state-of-the-art methods on three public medical datasets and achieves superior computational efficiency over the alternatives.
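A short sketch of transition-matrix (forward) correction helps picture how an estimated T can reconcile clean-class predictions with observed noisy labels. The way T is estimated here (row-normalized counts on a toy "clean" partition) and all constants are illustrative assumptions, not the paper's Pseudo-T procedure.

```python
# Minimal sketch of forward correction with an estimated label-transition matrix
# (assumption: toy estimation from a clean partition; not the paper's method).
import numpy as np

def estimate_T(pred_labels: np.ndarray, observed_labels: np.ndarray, k: int) -> np.ndarray:
    """T[i, j] ~ P(observed = j | true = i), estimated as row-normalized counts."""
    T = np.full((k, k), 1e-6)
    for p, o in zip(pred_labels, observed_labels):
        T[p, o] += 1.0
    return T / T.sum(axis=1, keepdims=True)

def forward_corrected_nll(probs: np.ndarray, observed: np.ndarray, T: np.ndarray) -> float:
    """Loss against noisy labels using p(observed) = p(clean) @ T."""
    noisy_probs = probs @ T
    return float(-np.log(noisy_probs[np.arange(len(observed)), observed] + 1e-12).mean())

k = 3
rng = np.random.default_rng(0)
pred = rng.integers(0, k, size=200)                                   # predictions on the clean split
observed = np.where(rng.random(200) < 0.8, pred, rng.integers(0, k, 200))  # ~20% flipped labels
T = estimate_T(pred, observed, k)

probs = rng.dirichlet(np.ones(k), size=32)                            # softmax outputs for a batch
labels = rng.integers(0, k, size=32)
print(forward_corrected_nll(probs, labels, T))
```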
A Simple and Provable Approach for Learning on Noisy Labeled Multi-modal Medical Images
[ "Nan Wang", "Zonglin Di", "Houlin He", "Qingchao Jiang", "Xiaoxiao Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bRYbhQLYx3
@inproceedings{ sheng2024enhancing, title={Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach}, author={Mengmeng Sheng and Zeren Sun and Gensheng Pei and Tao Chen and Haonan Luo and Yazhou Yao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bRYbhQLYx3} }
Label noise, an inevitable issue in various real-world datasets, tends to impair the performance of deep neural networks. A large body of literature focuses on symmetric co-training, aiming to enhance model robustness by exploiting interactions between models with distinct capabilities. However, the symmetric training processes employed in existing methods often culminate in model consensus, diminishing their efficacy in handling noisy labels. To this end, we propose an Asymmetric Co-Training (ACT) method to mitigate the detrimental effects of label noise. Specifically, we introduce an asymmetric training framework in which one model (i.e., RTM) is robustly trained with a selected subset of clean samples while the other (i.e., NTM) is conventionally trained using the entire training set. We propose two novel criteria based on agreement and discrepancy between models, establishing asymmetric sample selection and mining. Moreover, a metric, derived from the divergence between models, is devised to quantify label memorization, guiding our method in determining the optimal stopping point for sample mining. Finally, we propose to dynamically re-weight identified clean samples according to their reliability inferred from historical information. We additionally employ consistency regularization to achieve further performance improvement. Extensive experimental results on synthetic and real-world datasets demonstrate the effectiveness and superiority of our method.
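The agreement/discrepancy idea behind the asymmetric partitioning can be sketched with a simple thresholding rule over the two models' predictions; the specific rule and threshold below are toy stand-ins for the paper's selection and mining criteria.

```python
# Minimal sketch of agreement/discrepancy-based sample partitioning
# (assumption: toy confidence threshold; not the paper's exact criteria).
import numpy as np

def partition(probs_rtm: np.ndarray, probs_ntm: np.ndarray, labels: np.ndarray,
              tau: float = 0.7):
    """Split a batch into 'clean' (both models agree with the label confidently)
    and 'mined' (only the conventionally trained NTM agrees) index sets."""
    pred_r, pred_n = probs_rtm.argmax(1), probs_ntm.argmax(1)
    conf_r = probs_rtm[np.arange(len(labels)), labels]
    conf_n = probs_ntm[np.arange(len(labels)), labels]
    clean = np.where((pred_r == labels) & (pred_n == labels) &
                     (conf_r > tau) & (conf_n > tau))[0]
    mined = np.where((pred_n == labels) & (pred_r != labels) & (conf_n > tau))[0]
    return clean, mined

rng = np.random.default_rng(0)
probs_rtm = rng.dirichlet(np.ones(10), size=64)   # toy softmax outputs of the robust model
probs_ntm = rng.dirichlet(np.ones(10), size=64)   # toy softmax outputs of the conventional model
labels = rng.integers(0, 10, size=64)
clean_idx, mined_idx = partition(probs_rtm, probs_ntm, labels)
print(len(clean_idx), len(mined_idx))
```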
Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach
[ "Mengmeng Sheng", "Zeren Sun", "Gensheng Pei", "Tao Chen", "Haonan Luo", "Yazhou Yao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bPu5rKozS7
@inproceedings{ li2024towards, title={Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation}, author={Muquan Li and Dongyang Zhang and Tao He and Xiurui Xie and Yuan-Fang Li and Ke Qin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bPu5rKozS7} }
Data-free knowledge distillation (DFKD) has emerged as a pivotal technique in the domain of model compression, substantially reducing the dependency on the original training data. Nonetheless, conventional DFKD methods that employ synthesized training data are prone to the limitations of inadequate diversity and discrepancies in distribution between the synthesized and original datasets. To address these challenges, this paper introduces an innovative approach to DFKD through diverse diffusion augmentation (DDA). Specifically, we revise the paradigm of common data synthesis in DFKD to a composite process through leveraging diffusion models subsequent to data synthesis for self-supervised augmentation, which generates a spectrum of data samples with similar distributions while retaining controlled variations. Furthermore, to mitigate excessive deviation in the embedding space, we introduce an image filtering technique grounded in cosine similarity to maintain fidelity during the knowledge distillation process. Comprehensive experiments conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets showcase the superior performance of our method across various teacher-student network configurations, outperforming the contemporary state-of-the-art DFKD methods.
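The cosine-similarity filtering step mentioned above is easy to picture as a small function that keeps only augmented samples whose embeddings stay close to the original synthetic image; the embedding source and the threshold here are assumptions for illustration.

```python
# Minimal sketch of cosine-similarity filtering of diffusion-augmented samples
# (assumption: random features stand in for embeddings; threshold is illustrative).
import torch
import torch.nn.functional as F

def filter_augmented(orig_feat: torch.Tensor, aug_feats: torch.Tensor,
                     tau: float = 0.8) -> torch.Tensor:
    """Keep augmented samples whose embeddings remain close to the original's."""
    sims = F.cosine_similarity(aug_feats, orig_feat.unsqueeze(0), dim=1)
    return aug_feats[sims >= tau]

orig = F.normalize(torch.randn(128), dim=0)       # embedding of one synthetic image
augs = F.normalize(torch.randn(16, 128), dim=1)   # embeddings of its augmented variants
kept = filter_augmented(orig, augs)
print(kept.shape)
```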
Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation
[ "Muquan Li", "Dongyang Zhang", "Tao He", "Xiurui Xie", "Yuan-Fang Li", "Ke Qin" ]
Conference
poster
2410.17606
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bOjJGV2OCU
@inproceedings{ chen2024smart, title={{SMART}: Self-Weighted Multimodal Fusion for Diagnostics of Neurodegenerative Disorders}, author={qiuhui chen and Yi Hong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bOjJGV2OCU} }
Multimodal medical data, such as brain scans and non-imaging clinical records like demographics and neuropsychology examinations, play an important role in diagnosing neurodegenerative disorders, e.g., Alzheimer's disease (AD) and Parkinson's disease (PD). However, the disease-relevant information is overwhelmed by the high-dimensional image scans and the massive non-imaging data, making it a challenging task to fuse multimodal medical inputs efficiently. Recent multimodal learning methods adopt deep encoders to extract features and simple concatenation or alignment techniques for feature fusion, which suffer from the representation degeneration issue due to the vast irrelevant information. To address this challenge, we propose a deep self-weighted multimodal relevance weighting approach, which leverages clustering-based contrastive learning and eliminates the intra- and inter-modal irrelevancy. The learned relevance score is integrated as a gate with a multimodal attention transformer to provide an improved fusion for the final diagnosis. Our proposed model, called SMART (Self-weighted Multimodal Attention-and-Relevance gated Transformer), is extensively evaluated on three public AD/PD datasets and achieves state-of-the-art (SOTA) performance in the diagnostics of neurodegenerative disorders. Our source code will be available.
SMART: Self-Weighted Multimodal Fusion for Diagnostics of Neurodegenerative Disorders
[ "qiuhui chen", "Yi Hong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bO5kEKkdUY
@inproceedings{ zhang2024micm, title={{MICM}: Rethinking Unsupervised Pretraining for Enhanced Few-shot Learning}, author={Zhenyu Zhang and Guangyao Chen and Yixiong Zou and Zhimeng Huang and Yuhua Li and Ruixuan Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bO5kEKkdUY} }
Humans exhibit a remarkable ability to learn quickly from a limited number of labeled samples, a capability that starkly contrasts with that of current machine learning systems. Unsupervised Few-Shot Learning (U-FSL) seeks to bridge this divide by reducing reliance on annotated datasets during initial training phases. In this work, we first quantitatively assess the impacts of Masked Image Modeling (MIM) and Contrastive Learning (CL) on few-shot learning tasks. Our findings highlight the respective limitations of MIM and CL in terms of discriminative and generalization abilities, which contribute to their underperformance in U-FSL contexts. To address these trade-offs between generalization and discriminability in unsupervised pretraining, we introduce a novel paradigm named Masked Image Contrastive Modeling (MICM). MICM creatively combines the targeted object learning strength of CL with the generalized visual feature learning capability of MIM, significantly enhancing its efficacy in downstream few-shot learning inference. Extensive experimental analyses confirm the advantages of MICM, demonstrating significant improvements in both generalization and discrimination capabilities for few-shot learning. Our comprehensive quantitative evaluations further substantiate the superiority of MICM, showing that our two-stage U-FSL framework based on MICM markedly outperforms existing leading baselines. We provide the source code in the supplementary materials for reproducibility.
MICM: Rethinking Unsupervised Pretraining for Enhanced Few-shot Learning
[ "Zhenyu Zhang", "Guangyao Chen", "Yixiong Zou", "Zhimeng Huang", "Yuhua Li", "Ruixuan Li" ]
Conference
oral
2408.13385
[ "https://github.com/icgy96/micm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bFXYw5fG8t
@inproceedings{ su2024ibmea, title={{IBMEA}: Exploring Variational Information Bottleneck for Multi-modal Entity Alignment}, author={Taoyu Su and Jiawei Sheng and Shicheng Wang and Xinghua Zhang and Hongbo Xu and Tingwen Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bFXYw5fG8t} }
Multi-modal entity alignment (MMEA) aims to identify equivalent entities between multi-modal knowledge graphs (MMKGs), where entities can be associated with related images. Most existing studies rely heavily on the automatically learned multi-modal fusion modules, which may allow redundant information such as misleading clues in the generated entity representations, impeding the feature consistency of equivalent entities. To this end, we propose a variational framework for MMEA via information bottleneck, termed as IBMEA, by emphasizing alignment-relevant information while suppressing alignment-irrelevant information in entity representations. Specifically, we first develop multi-modal variational encoders that represent modal-specific features as probability distributions. Then, we propose four modal-specific information bottleneck regularizers to limit the misleading clues in the modal-specific entity representations. Finally, we propose a modal-hybrid information contrastive regularizer to integrate modal-specific representations and ensure the similarity of equivalent entities between MMKGs to achieve MMEA. We conduct extensive experiments on 2 cross-KG and 3 bilingual MMEA datasets. Experimental results demonstrate that our model consistently outperforms previous state-of-the-art methods, and also shows promising and robust performance especially in the low-resource and high-noise data scenarios.
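A modal-specific information-bottleneck regularizer of the kind described above is typically a KL term between the encoder's diagonal Gaussian and a standard normal, combined with reparameterized sampling. The sketch below shows only that generic building block; the contrastive regularizer and the encoders themselves are omitted and the dimensions are toy values.

```python
# Minimal sketch of a variational information-bottleneck regularizer
# (assumption: diagonal Gaussian posteriors; not the full IBMEA objective).
import torch

def ib_regularizer(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch."""
    return 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=-1).mean()

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

mu, log_var = torch.randn(32, 64), torch.randn(32, 64)   # one modality's encoder output
z = reparameterize(mu, log_var)                          # stochastic entity representation
loss_ib = ib_regularizer(mu, log_var)                    # suppresses alignment-irrelevant info
print(z.shape, float(loss_ib))
```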
IBMEA: Exploring Variational Information Bottleneck for Multi-modal Entity Alignment
[ "Taoyu Su", "Jiawei Sheng", "Shicheng Wang", "Xinghua Zhang", "Hongbo Xu", "Tingwen Liu" ]
Conference
poster
2407.19302
[ "https://github.com/sutaoyu/IBMEA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=bDjEC9P0sl
@inproceedings{ liao2024freehand, title={Freehand Sketch Generation from Mechanical Components}, author={Zhichao Liao and Fengyuan Piao and Di Huang and Xinghui Li and Yue Ma and Pingfa Feng and Heming Fang and Long ZENG}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=bDjEC9P0sl} }
Drawing freehand sketches of mechanical components on multimedia devices for AI-based engineering modeling is becoming a new trend. However, its development is being impeded because existing works cannot produce suitable sketches for data-driven research. These works either generate sketches lacking a freehand style or utilize generative models not originally designed for this task, resulting in poor effectiveness. To address this issue, we design a two-stage generative framework mimicking the human sketching behavior pattern, called MSFormer, which is the first to produce humanoid freehand sketches tailored for mechanical components. The first stage employs Open CASCADE technology to obtain multi-view contour sketches from mechanical components, filtering perturbing signals for the ensuing generation process. Meanwhile, we design a view selector to simulate viewpoint selection tasks during human sketching for picking out information-rich sketches. The second stage translates contour sketches into freehand sketches by a transformer-based generator. To retain essential modeling features as much as possible and rationalize stroke distribution, we introduce a novel edge-constraint stroke initialization. Furthermore, we utilize a CLIP vision encoder and a new loss function incorporating the Hausdorff distance to enhance the generalizability and robustness of the model. Extensive experiments demonstrate that our approach achieves state-of-the-art performance for generating freehand sketches in the mechanical domain.
Freehand Sketch Generation from Mechanical Components
[ "Zhichao Liao", "Fengyuan Piao", "Di Huang", "Xinghui Li", "Yue Ma", "Pingfa Feng", "Heming Fang", "Long ZENG" ]
Conference
poster
2408.05966
[ "https://github.com/di-huang/Freehand-Sketch-Generation-from-Mechanical-Components" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=b6VJWTQKLF
@inproceedings{ xiao2024mambatrack, title={MambaTrack: a simple baseline for multiple object tracking with State Space Model}, author={Changcheng Xiao and Qiong Cao and Zhigang Luo and Long Lan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=b6VJWTQKLF} }
Tracking by detection has been the prevailing paradigm in the field of Multi-object Tracking (MOT). These methods typically rely on the Kalman Filter to estimate the future locations of objects, assuming linear object motion. However, they fall short when tracking objects exhibiting nonlinear and diverse motion in scenarios like dancing and sports. In addition, there has been limited focus on utilizing learning-based motion predictors in MOT. To address these challenges, we resort to exploring data-driven motion prediction methods. Inspired by the great potential of state space models (SSMs), such as Mamba, for long-term sequence modeling with near-linear complexity, we introduce a Mamba-based motion model named Mamba moTion Predictor (MTP). MTP is designed to model the complex motion patterns of objects like dancers and athletes. Specifically, MTP takes the spatial-temporal location dynamics of objects as input, captures the motion pattern using a bi-Mamba encoding layer, and predicts the next motion. In real-world scenarios, objects may be missed due to occlusion or motion blur, leading to premature termination of their trajectories. To tackle this challenge, we further expand the application of MTP. We employ it in an autoregressive way to compensate for missing observations by utilizing its own predictions as inputs, thereby contributing to more consistent trajectories. Our proposed tracker, MambaTrack, demonstrates advanced performance on benchmarks such as Dancetrack and SportsMOT, which are characterized by complex motion and severe occlusion.
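The autoregressive compensation idea can be sketched with any sequence-to-next-box predictor fed its own outputs when detections are missing. Below, a GRU stands in for the bi-Mamba motion predictor purely for illustration; the box parameterization and shapes are assumptions.

```python
# Minimal sketch of autoregressive compensation for missed observations
# (assumption: a GRU replaces the bi-Mamba predictor; boxes are (cx, cy, w, h)).
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    def __init__(self, dim: int = 4, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(history)            # history: (B, T, 4)
        return self.head(out[:, -1])          # predicted next box: (B, 4)

predictor = MotionPredictor()
history = torch.randn(1, 10, 4)               # last 10 observed boxes of one track

# When detections are missing, feed predictions back in as pseudo-observations.
for _ in range(3):                             # 3 consecutive missed frames
    next_box = predictor(history)
    history = torch.cat([history, next_box.unsqueeze(1)], dim=1)
print(history.shape)                           # torch.Size([1, 13, 4])
```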
MambaTrack: a simple baseline for multiple object tracking with State Space Model
[ "Changcheng Xiao", "Qiong Cao", "Zhigang Luo", "Long Lan" ]
Conference
oral
2408.09178
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ay1Tv3PcNw
@inproceedings{ jia2024convert, title={Convert and Speak: Zero-shot Accent Conversion with Minimum Supervision}, author={zhijun jia and Huaying Xue and Xiulian Peng and Yan Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ay1Tv3PcNw} }
The scarcity of parallel data is the key challenge of the accent conversion (AC) problem, in which both the pronunciation units and the prosody pattern need to be converted. We propose a two-stage generative framework "convert-and-speak" in which the conversion is only operated on the semantic token level and the speech is synthesized conditioned on the converted semantic token with a speech generative model in the target accent domain. The decoupling design enables the "speaking" module to use a massive amount of target accent speech and reduces the parallel data required for the "conversion" module. Conversion via the bridge of semantic tokens also relieves the requirement for data with text transcriptions and unlocks the use of language pre-training technology to further reduce the need for parallel accent speech data. To reduce the complexity and latency of "speaking", a single-stage AR generative model is designed to achieve good quality as well as lower computation cost. Experiments on Indian-English to general American-English conversion show that the proposed framework achieves state-of-the-art performance in accent similarity, speech quality, and speaker maintenance with only 15 minutes of weakly parallel data, which is not constrained to the same speaker. Extensive experimentation with diverse accent types suggests that this framework possesses a high degree of adaptability, making it readily scalable to accommodate other accents with low-resource data. Audio samples are available at https://convert-and-speak.github.io/demo/
Convert and Speak: Zero-shot Accent Conversion with Minimum Supervision
[ "zhijun jia", "Huaying Xue", "Xiulian Peng", "Yan Lu" ]
Conference
poster
2408.10096
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aspe8HE0ZA
@inproceedings{ xu2024identitydriven, title={Identity-Driven Multimedia Forgery Detection via Reference Assistance}, author={Junhao Xu and Jingjing Chen and Xue Song and Feng Han and Haijun Shan and Yu-Gang Jiang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aspe8HE0ZA} }
Recent technological advancements, such as the "deepfake" techniques, have paved the way for generating various media forgeries. In response to the potential hazards of these media forgeries, many researchers engage in exploring detection methods, increasing the demand for high-quality media forgery datasets. Despite this, existing datasets have certain limitations. Firstly, most datasets focus on manipulating visual modality and usually lack diversity, as only a few forgery approaches are considered. Secondly, the quality of media is often inadequate in clarity and naturalness. Meanwhile, the size of the dataset is also limited. Thirdly, it is commonly observed that real-world forgeries are motivated by identity, yet the identity information of the individuals portrayed in these forgeries within existing datasets remains under-explored. For detection, identity information could be an essential clue to boost performance. Moreover, official media concerning relevant identities on the Internet can serve as prior knowledge, aiding both the audience and forgery detectors in determining the true identity. Therefore, we propose an identity-driven multimedia forgery dataset, IDForge, which contains 249,138 video shots. All video shots are sourced from 324 wild videos of 54 celebrities collected from the Internet. The fake video shots involve 9 types of manipulation across visual, audio, and textual modalities. Additionally, IDForge provides extra 214,438 real video shots as a reference set for the 54 celebrities. Correspondingly, we design an effective multimedia detection network termed the Reference-assisted Multimodal Forgery Detection Network (R-MFDN). Through extensive experiments on the proposed dataset, we demonstrate the effectiveness of R-MFDN on the multimedia detection task.
Identity-Driven Multimedia Forgery Detection via Reference Assistance
[ "Junhao Xu", "Jingjing Chen", "Xue Song", "Feng Han", "Haijun Shan", "Yu-Gang Jiang" ]
Conference
oral
2401.11764
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=adqDa13uXh
@inproceedings{ zhao2024coplparameterefficient, title={Co{PL}:Parameter-Efficient Collaborative Prompt Learning for Audio-Visual Tasks}, author={Yihan Zhao and Wei Xi and Cui Yuhang and Gairui Bai and Xinhui Liu and Jizhong Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=adqDa13uXh} }
Parameter-Efficient Fine Tuning (PEFT) has been demonstrated to be effective and efficient for transferring foundation models to downstream tasks. Transferring pretrained uni-modal models to multi-modal downstream tasks helps alleviate substantial computational costs for retraining multi-modal models. However, existing approaches primarily focus on multi-modal fusion, while neglecting the modal-specific fine-tuning, which is also crucial for multi-modal tasks. To this end, we propose parameter-efficient $Co$llaborative $P$rompt $L$earning ($CoPL$) to fine-tune both uni-modal and multi-modal features. Specifically, the collaborative prompts consist of modal-specific prompts and modal-interaction prompts. The modal-specific prompts are tailored for fine-tuning each modality, while the modal-interaction prompts are customized to explore inter-modality association. Furthermore, prompt bank-based mutual coupling is introduced to extract instance-level features, further enhancing the model's generalization ability. Extensive experimental results demonstrate that our approach achieves comparable or higher performance on various audio-visual downstream tasks while utilizing approximately 1% extra trainable parameters.
CoPL:Parameter-Efficient Collaborative Prompt Learning for Audio-Visual Tasks
[ "Yihan Zhao", "Wei Xi", "Cui Yuhang", "Gairui Bai", "Xinhui Liu", "Jizhong Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=adbHOfcFOD
@inproceedings{ hu2024distilled, title={Distilled Cross-Combination Transformer for Image Captioning with Dual Refined Visual Features}, author={Junbo Hu and Zhixin Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=adbHOfcFOD} }
Transformer-based encoders that encode both region and grid features are the preferred choice for the image captioning task due to their multi-head self-attention mechanism. This mechanism ensures superior capture of relationships and contextual information between various regions in an image. However, because of the Transformer block stacking, self-attention computes the visual features several times, increasing computing costs and producing a great deal of redundant feature calculation. In this paper, we propose a novel Distilled Cross-Combination Transformer (DCCT) network. Specifically, we first design a distillation cascade fusion encoder (DCFE) to filter out redundant features in visual features that affect attentional focus, obtaining refined features. Additionally, we introduce a parallel cross-fusion attention module (PCFA) that fully utilizes the complementarity and correlation between grid and region features to better fuse the encoded dual visual features. Extensive experiments on the MSCOCO dataset demonstrate that the proposed DCCT strategy outperforms many state-of-the-art techniques and attains exceptional performance.
Distilled Cross-Combination Transformer for Image Captioning with Dual Refined Visual Features
[ "Junbo Hu", "Zhixin Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=acJMIXJg2u
@inproceedings{ zhang2024audio, title={Audio Deepfake Detection with Self-Supervised {XLS}-R and {SLS} Classifier}, author={Qishan Zhang and Shuangbing Wen and Tao Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=acJMIXJg2u} }
Generative AI technologies, including text-to-speech (TTS) and voice conversion (VC), frequently become indistinguishable from genuine samples, posing challenges for individuals in discerning between real and synthetic content. This indistinguishability undermines trust in media, and the arbitrary cloning of personal voice signals presents significant challenges to privacy and security. In the field of deepfake audio detection, the majority of models achieving higher detection accuracy currently employ self-supervised pre-trained models. However, with the ongoing development of deepfake audio generation algorithms, maintaining high discrimination accuracy against new algorithms grows more challenging. To enhance the sensitivity of deepfake audio features, we propose a deepfake audio detection model that incorporates an SLS (Sensitive Layer Selection) module. Specifically, utilizing the pre-trained XLS-R enables our model to extract diverse audio features from its various layers, each providing distinct discriminative information. Utilizing the SLS classifier, our model captures sensitive contextual information across different layer levels of audio features, effectively employing this information for fake audio detection. Experimental results show that our method achieves state-of-the-art (SOTA) performance on both the ASVspoof 2021 DF and In-the-Wild datasets, with a specific Equal Error Rate (EER) of 1.92\% on the ASVspoof 2021 DF dataset and 7.46\% on the In-the-Wild dataset. Codes and data can be found at https://github.com/QiShanZhang/SLSforADD.
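Selecting or weighting per-layer features for the final classifier can be illustrated with a learned softmax weighting over layer outputs. In the sketch below, random tensors stand in for XLS-R hidden states and the head is a plain linear classifier; none of this reflects the paper's exact SLS module.

```python
# Minimal sketch of layer-weighted feature fusion for fake-audio detection
# (assumption: random tensors replace XLS-R hidden states; toy classifier head).
import torch
import torch.nn as nn

class LayerWeightedClassifier(nn.Module):
    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))  # learned layer sensitivity
        self.head = nn.Linear(dim, 2)                              # real vs. fake

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_layers, B, T, dim)
        w = torch.softmax(self.layer_logits, dim=0).view(-1, 1, 1, 1)
        fused = (w * hidden_states).sum(dim=0)      # weighted sum over layers
        return self.head(fused.mean(dim=1))         # pool over time, then classify

hidden = torch.randn(25, 8, 100, 1024)              # 25 layers, batch 8, 100 frames
logits = LayerWeightedClassifier(25, 1024)(hidden)
print(logits.shape)                                  # torch.Size([8, 2])
```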
Audio Deepfake Detection with Self-Supervised XLS-R and SLS Classifier
[ "Qishan Zhang", "Shuangbing Wen", "Tao Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aUt0hjq4Sn
@inproceedings{ xu2024gensegnet, title={Ge{NS}eg-Net: A General Segmentation Framework for Any Nucleus in Immunohistochemistry Images}, author={Siyuan Xu and Guannan Li and Haofei Song and Jiansheng Wang and Yan Wang and Qingli Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aUt0hjq4Sn} }
Immunohistochemistry (IHC) plays a crucial role in understanding disease mechanisms, diagnosing pathology and guiding treatment decisions. The precise analysis heavily depends on accurate nucleus segmentation. However, segmentation is challenging due to significant inter- and intra-nucleus variability in morphology and distribution, stemming from inherent characteristics, imaging techniques, tissue differences and other factors. While current deep learning-based methods have shown promising results, their generalization performance is limited, inevitably requiring specific training data. To address the problem, we propose a novel General framework for Nucleus Segmentation in IHC images (GeNSeg-Net). GeNSeg-Net effectively segments nuclei across diverse tissue types and imaging techniques with high variability using a small subset for training. It comprises an enhancement model and a segmentation model. Initially, all nuclei are enhanced to a uniform morphology with distinct features by the enhancement model through generation. The subsequent segmentation task is thereby simplified, leading to higher accuracy. We design a lightweight generator and discriminator to improve both enhancement quality and computational efficiency. Extensive experiments demonstrate the effectiveness of each component within GeNSeg-Net. Compared to existing methods, GeNSeg-Net achieves state-of-the-art (SOTA) segmentation accuracy and generalization performance on both private and public datasets, while maintaining highly competitive processing speed. Code will be available for research and clinical purposes.
GeNSeg-Net: A General Segmentation Framework for Any Nucleus in Immunohistochemistry Images
[ "Siyuan Xu", "Guannan Li", "Haofei Song", "Jiansheng Wang", "Yan Wang", "Qingli Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aTcU2fPFXr
@inproceedings{ jiang2024revisiting, title={Revisiting Unsupervised Temporal Action Localization: The Primacy of High-Quality Actionness and Pseudolabels}, author={Han Jiang and Haoyu Tang and Ming Yan and Ji Zhang and Mingzhu Xu and Yupeng Hu and Jihua Zhu and Liqiang Nie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aTcU2fPFXr} }
Recently, temporal action localization (TAL) methods, especially the weakly-supervised and unsupervised ones, have become a hot research topic. Existing unsupervised methods follow an iterative "clustering and training" strategy with diverse model designs during the training stage, while they often overlook maintaining consistency between these stages, which is crucial: more accurate clustering results can reduce the noise of pseudolabels and thus enhance model training, while more robust training can in turn enrich clustering feature representation. We identify two critical challenges in unsupervised scenarios: 1. What features should the model generate for clustering? 2. Which pseudolabeled instances from clustering should be chosen for model training? After extensive explorations, we propose a novel yet simple framework called Consistency-Oriented Progressive high actionness Learning to address these issues. For feature generation, our framework adopts a High Actionness snippet Selection (HAS) module to generate more discriminative global video features for clustering from the enhanced actionness features obtained from a designed Inner-Outer Consistency Network (IOCNet). For pseudolabel selection, we introduce a Progressive Learning With Representative Instances (PLRI) strategy to identify the most reliable and informative instances within each cluster for model training. These three modules, HAS, IOCNet, and PLRI, synergistically improve consistency in model training and clustering performance. Extensive experiments on THUMOS’14 and ActivityNet v1.2 datasets under both unsupervised and weakly-supervised settings demonstrate that our framework achieves state-of-the-art results.
Revisiting Unsupervised Temporal Action Localization: The Primacy of High-Quality Actionness and Pseudolabels
[ "Han Jiang", "Haoyu Tang", "Ming Yan", "Ji Zhang", "Mingzhu Xu", "Yupeng Hu", "Jihua Zhu", "Liqiang Nie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aNrVaZlMQ0
@inproceedings{ gao2024retomeva, title={ReToMe-{VA}: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack}, author={Ziyi Gao and Kai Chen and Zhipeng Wei and Tingshu Mou and Jingjing Chen and Zhiyu Tan and Hao Li and Yu-Gang Jiang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aNrVaZlMQ0} }
Recent diffusion-based unrestricted attacks generate imperceptible adversarial examples with high transferability compared to previous unrestricted attacks and restricted attacks. However, existing works on diffusion-based unrestricted attacks are mostly focused on images yet are seldom explored in videos. In this paper, we propose the Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack (ReToMe-VA), which is the first framework to generate imperceptible adversarial video clips with higher transferability. Specifically, to achieve spatial imperceptibility, ReToMe-VA adopts a Timestep-wise Adversarial Latent Optimization (TALO) strategy that optimizes perturbations in diffusion models' latent space at each denoising step. TALO offers iterative and accurate updates to generate more powerful adversarial frames. TALO can further reduce memory consumption in gradient computation. Moreover, to achieve temporal imperceptibility, ReToMe-VA introduces a Recursive Token Merging (ReToMe) mechanism by matching and merging tokens across video frames in the self-attention module, resulting in temporally consistent adversarial videos. ReToMe concurrently facilitates inter-frame interactions into the attack process, inducing more diverse and robust gradients, thus leading to better adversarial transferability. Extensive experiments demonstrate the efficacy of ReToMe-VA, particularly in surpassing state-of-the-art attacks in adversarial transferability by more than 14.16\% on average.
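The cross-frame token-merging idea can be pictured with a ToMe-style sketch that matches tokens of one frame to their most similar tokens in an adjacent frame and averages the top-r pairs; the matching rule, merge count, and shapes below are illustrative assumptions rather than the ReToMe module.

```python
# Minimal ToMe-style sketch of merging tokens across two adjacent frames
# (assumption: simple top-r cosine matching; not the paper's exact mechanism).
import torch
import torch.nn.functional as F

def merge_across_frames(tok_a: torch.Tensor, tok_b: torch.Tensor, r: int):
    """Merge the r most similar (frame-B token, frame-A token) pairs by averaging."""
    sim = F.normalize(tok_b, dim=-1) @ F.normalize(tok_a, dim=-1).T   # (Nb, Na)
    best_sim, best_idx = sim.max(dim=1)                # best frame-A match per B token
    merge_b = best_sim.topk(r).indices                 # frame-B tokens chosen for merging
    keep_mask = torch.ones(tok_b.shape[0], dtype=torch.bool)
    keep_mask[merge_b] = False
    tok_a = tok_a.clone()
    tok_a[best_idx[merge_b]] = 0.5 * (tok_a[best_idx[merge_b]] + tok_b[merge_b])
    return tok_a, tok_b[keep_mask]                     # merged A tokens, remaining B tokens

a, b = torch.randn(196, 64), torch.randn(196, 64)      # tokens of two adjacent frames
a2, b2 = merge_across_frames(a, b, r=49)
print(a2.shape, b2.shape)                               # (196, 64), (147, 64)
```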
ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack
[ "Ziyi Gao", "Kai Chen", "Zhipeng Wei", "Tingshu Mou", "Jingjing Chen", "Zhiyu Tan", "Hao Li", "Yu-Gang Jiang" ]
Conference
poster
2408.05479
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aMgbnUqSHv
@inproceedings{ wu2024harmony, title={Harmony in Diversity: Improving All-in-One Image Restoration via Multi-Task Collaboration}, author={Gang Wu and Junjun Jiang and Kui Jiang and Xianming Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aMgbnUqSHv} }
Deep learning-based all-in-one image restoration methods have garnered significant attention in recent years because they are capable of addressing multiple degradation tasks. These methods focus on extracting task-oriented information to guide the unified model and have achieved promising results through elaborate architecture design. They commonly adopt a simple mixed training paradigm, and the proper optimization strategy for all-in-one tasks has scarcely been investigated. This oversight neglects the intricate relationships and potential conflicts among various restoration tasks, consequently leading to inconsistent optimization rhythms. In this paper, we extend and redefine the conventional all-in-one image restoration task as a multi-task learning problem and propose a straightforward yet effective active-reweighting strategy, dubbed $\textbf{Art}$, to harmonize the optimization of multiple degradation tasks. Art is a plug-and-play optimization strategy designed to mitigate hidden conflicts among multi-task optimization processes. Through extensive experiments on a diverse range of all-in-one image restoration settings, Art has been demonstrated to substantially enhance the performance of existing methods. When incorporated into the AirNet and TransWeather models, it achieves average improvements of $\textbf{1.16}$ dB and $\textbf{1.21}$ dB on PSNR, respectively. We hope this work will provide a principled framework for coordinating multiple tasks in all-in-one image restoration and pave the way for more efficient and effective restoration models, ultimately advancing the state-of-the-art in this critical research domain. Code and pre-trained models are available at our project page https://github.com/Aitical/Art.
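Loss re-weighting across restoration tasks can be sketched with a simple rule that upweights tasks making slow progress; the inverse-progress rule, temperature, and task names below are toy assumptions and not the paper's actual Art strategy.

```python
# Minimal sketch of re-weighting multiple restoration-task losses
# (assumption: illustrative inverse-progress rule; not the Art algorithm).
import torch

def reweight(curr_losses: torch.Tensor, prev_losses: torch.Tensor,
             temperature: float = 1.0) -> torch.Tensor:
    """Give larger weight to tasks whose loss is decreasing more slowly."""
    progress = curr_losses / (prev_losses + 1e-8)      # ~1 means little progress
    return torch.softmax(progress / temperature, dim=0) * len(curr_losses)

prev = torch.tensor([1.00, 0.80, 1.20])                # e.g. derain / dehaze / denoise losses
curr = torch.tensor([0.90, 0.40, 1.15])
w = reweight(curr, prev)
total_loss = (w * curr).sum()                          # weighted multi-task objective
print(w, float(total_loss))
```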
Harmony in Diversity: Improving All-in-One Image Restoration via Multi-Task Collaboration
[ "Gang Wu", "Junjun Jiang", "Kui Jiang", "Xianming Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aLK06bsEgo
@inproceedings{ chen2024enabling, title={Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation}, author={Bohong Chen and Yumeng Li and Yao-Xiang Ding and Tianjia Shao and Kun Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aLK06bsEgo} }
Current co-speech motion generation approaches usually focus on upper body gestures following speech contents only, while lacking supporting the elaborate control of synergistic full-body motion based on text prompts, such as {\it talking while walking}. The major challenges lie in 1) the existing speech-to-motion datasets only involve highly limited full-body motions, making a wide range of common human activities out of training distribution; 2) these datasets also lack annotated user prompts. To address these challenges, we propose SynTalker, which utilizes the off-the-shelf text-to-motion dataset as an auxiliary for supplementing the missing full-body motion and prompts. The core technical contributions are two-fold. One is the multi-stage training process which obtains an aligned embedding space of motion, speech, and prompts despite the significant distributional mismatch in motion between speech-to-motion and text-to-motion datasets. Another is the diffusion-based conditional inference process, which utilizes the separate-then-combine strategy to realize fine-grained control of local body parts. Extensive experiments are conducted to verify that our approach supports precise and flexible control of synergistic full-body motion generation based on both speeches and user prompts, which is beyond the ability of existing approaches. The code is released on (link will be published upon acceptance).
Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation
[ "Bohong Chen", "Yumeng Li", "Yao-Xiang Ding", "Tianjia Shao", "Kun Zhou" ]
Conference
poster
2410.00464
[ "https://github.com/RobinWitch/SynTalker" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aE2ZGYxrAe
@inproceedings{ tang2024zenith, title={Zenith: Real-time Identification of {DASH} Encrypted Video Traffic with Distortion}, author={Weitao Tang and Jianqiang Li and Meijie Du and Die Hu and Qingyun Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aE2ZGYxrAe} }
Some video traffic carries harmful content, such as hate speech and child abuse, primarily encrypted and transmitted through Dynamic Adaptive Streaming over HTTP (DASH). Promptly identifying and intercepting traffic of harmful videos is crucial in network regulation. However, QUIC is becoming another DASH transport protocol in addition to TCP. On the other hand, complex network environments and diverse playback modes lead to significant distortions in traffic. The issues above have not been effectively addressed. This paper proposes a real-time identification method for DASH encrypted video traffic with distortion, named Zenith. We extract stable video segment sequences under various itags as video fingerprints to tackle resolution changes and propose a method of traffic fingerprint extraction under QUIC and VPN. Subsequently, simulating the sequence matching problem as a natural language problem, we propose Traffic Language Model (TLM), which can effectively address video data loss and retransmission. Finally, we propose a frequency dictionary to accelerate Zenith's speed further. Zenith significantly improves accuracy and speed compared to other SOTA methods in various complex scenarios, especially in QUIC, VPN, automatic resolution, and low bandwidth. Zenith requires traffic for just half a minute of video content to achieve precise identification, demonstrating its real-time effectiveness. The project page is available at https://anonymous.4open.science/r/Zenith-Anonymous.
Zenith: Real-time Identification of DASH Encrypted Video Traffic with Distortion
[ "Weitao Tang", "Jianqiang Li", "Meijie Du", "Die Hu", "Qingyun Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aBNrNhMhyO
@inproceedings{ beizhangguo2024lumos, title={Lumos: Optimizing Live 360-degree Video Upstreaming via Spatial-Temporal Integrated Neural Enhancement}, author={BeizhangGuo and Juntao Bao and Baili Chai and Di Wu and Miao Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aBNrNhMhyO} }
As VR devices become increasingly prevalent, live 360-degree video has surged in popularity. However, current live 360-degree video systems heavily rely on uplink bandwidth to deliver high-quality live videos. Recent advancements in neural-enhanced streaming offer a promising solution to this limitation by leveraging server-side computation to conserve bandwidth. Nevertheless, these methods have primarily concentrated on neural enhancement within a single domain (either spatial or temporal), which may not adeptly adapt to diverse video scenarios and fluctuating bandwidth conditions. In this paper, we propose Lumos, a novel spatial-temporal integrated neural-enhanced live 360-degree video streaming system. To accommodate varied video scenarios, we devise a real-time Neural-enhanced Quality Prediction (NQP) model to predict the neural-enhanced quality for different video contents. To cope with varying bandwidth conditions, we design a Content-aware Bitrate Allocator, which dynamically allocates bitrates and selects an appropriate neural enhancement configuration based on the current bandwidth. Moreover, Lumos employs online learning to improve prediction performance and adjust resource utilization to optimize user quality of experience (QoE). Experimental results demonstrate that Lumos surpasses state-of-the-art neural-enhanced systems with an improvement of up to 0.022 in terms of SSIM, translating to an 8.2%-8.5% enhancement in QoE for live stream viewers.
Lumos: Optimizing Live 360-degree Video Upstreaming via Spatial-Temporal Integrated Neural Enhancement
[ "BeizhangGuo", "Juntao Bao", "Baili Chai", "Di Wu", "Miao Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=aA31Dsy2SA
@inproceedings{ peng2024towards, title={Towards Video-based Activated Muscle Group Estimation in the Wild}, author={Kunyu Peng and David Schneider and Alina Roitberg and Kailun Yang and Jiaming Zhang and Chen Deng and Kaiyu Zhang and M. Saquib Sarfraz and Rainer Stiefelhagen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=aA31Dsy2SA} }
In this paper, we tackle the new task of video-based Activated Muscle Group Estimation (AMGE), which aims to identify active muscle regions during physical activity in the wild. To this end, we provide the MuscleMap dataset featuring >15K video clips with 135 different activities and 20 labeled muscle groups. This dataset opens up vistas for multiple video-based applications in sports and rehabilitation medicine under flexible environment constraints. The proposed MuscleMap dataset is constructed from YouTube videos, specifically targeting High-Intensity Interval Training (HIIT) physical exercise in the wild. To make the AMGE model applicable in real-life situations, it is crucial to ensure that the model can generalize well to numerous types of physical activities not present during training and involving new combinations of activated muscles. To achieve this, our benchmark also covers an evaluation setting where the model is exposed to activity types excluded from the training set. Our experiments reveal that the generalizability of existing architectures adapted for the AMGE task remains a challenge. Therefore, we also propose a new approach, TransM3E, which employs a multi-modality feature fusion mechanism between the video transformer model and the skeleton-based graph convolution model, with novel cross-modal knowledge distillation executed on multi-classification tokens. The proposed method surpasses all popular video classification models on both previously seen and new types of physical activities. The contributed dataset and code will be publicly available.
Towards Video-based Activated Muscle Group Estimation in the Wild
[ "Kunyu Peng", "David Schneider", "Alina Roitberg", "Kailun Yang", "Jiaming Zhang", "Chen Deng", "Kaiyu Zhang", "M. Saquib Sarfraz", "Rainer Stiefelhagen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a9rT60KtXA
@inproceedings{ wang2024modeling, title={Modeling Event-level Causal Representation for Video Classification}, author={Yuqing Wang and Lei Meng and Haokai Ma and Yuqing Wang and Haibei HUANG and Xiangxu Meng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=a9rT60KtXA} }
Classifying videos differs from classifying images in the need to capture information on what has happened, instead of what is in the frames. Conventional methods typically follow the data-driven approach, which uses transformer-based attention models to extract and aggregate the features of video frames as the representation of the entire video. However, this approach tends to extract the object information of frames and may face difficulties in classifying classes that describe events, such as "fixing bicycle". To address this issue, this paper presents an Event-level Causal Representation Learning (ECRL) model for the spatio-temporal modeling of both the in-frame object interactions and their cross-frame temporal correlations. Specifically, ECRL first employs a Frame-to-Video Causal Modeling (F2VCM) module, which simultaneously builds the in-frame causal graph with the background and foreground information and models their cross-frame correlations to construct a video-level causal graph. Subsequently, a Causality-aware Event-level Representation Inference (CERI) module is introduced to eliminate the spurious correlations in contexts and objects via the back- and front-door interventions, respectively. The former involves visual context de-biasing to filter out background confounders, while the latter employs global-local causal attention to capture event-level visual information. Experimental results on two benchmark datasets verify that ECRL can better capture cross-frame correlations to describe videos with event-level features. The source code is provided in the supplementary material.
Modeling Event-level Causal Representation for Video Classification
[ "Yuqing Wang", "Lei Meng", "Haokai Ma", "Yuqing Wang", "Haibei HUANG", "Xiangxu Meng" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a946OnIlYC
@inproceedings{ xu2024osnerf, title={{OSN}e{RF}: On-demand Semantic Neural Radiance Fields for Fast and Robust 3D Object Reconstruction}, author={Rui Xu and Gaolei Li and Changze Li and Zhaohui Yang and Yuchen Liu and Mingzhe Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=a946OnIlYC} }
By leveraging multi-view inputs to synthesize novel-view images, Neural Radiance Fields (NeRF) have emerged as a prominent technique for 3D object reconstruction. However, existing methods primarily focus on global scene reconstruction using large datasets, which necessitates substantial computational resources and imposes high-quality requirements on input images. Nevertheless, in practical applications, users prioritize the 3D reconstruction results of on-demand specific objects (OSOs) based on their individual demands. Furthermore, the collected images transmitted through high-interference wireless environments (HIWE) negatively impact the accuracy of NeRF reconstruction, thereby limiting its scalability. In this paper, we propose a novel on-demand Semantic Neural Radiance Fields (OSNeRF) scheme, which offers fast and robust 3D object reconstruction for diverse tasks. Within OSNeRF, a semantic encoder is employed to extract core semantic features of OSOs from the collected scene images, a semantic decoder is utilized to facilitate robust image recovery under HIWE conditions, and a lightweight renderer is employed for fast and efficient object reconstruction. Moreover, a semantic control unit (SCU) is introduced to guide the above components, thereby enhancing the efficiency of reconstruction. Experiments demonstrate that the proposed OSNeRF enables fast and robust object reconstruction under HIWE, surpassing the performance of state-of-the-art (SOTA) methods in terms of reconstruction quality.
OSNeRF: On-demand Semantic Neural Radiance Fields for Fast and Robust 3D Object Reconstruction
[ "Rui Xu", "Gaolei Li", "Changze Li", "Zhaohui Yang", "Yuchen Liu", "Mingzhe Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a7mdX3ZIoj
@inproceedings{ liu2024dualhead, title={Dual-head Genre-instance Transformer Network for Arbitrary Style Transfer}, author={Meichen Liu and Shuting He and Songnan Lin and Bihan Wen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=a7mdX3ZIoj} }
Arbitrary style transfer aims to render artistic features from a style reference onto an image while retaining its original content. Previous methods either focus on learning the holistic style of a specific artist or on extracting instance features from a single artwork. However, they often fail to apply style elements uniformly across the entire image and lack adaptation to the style of different artworks. To solve these issues, our key insight is that the art genre has better generality and adaptability than the overall features of the artist. To this end, we propose a Dual-head Genre-instance Transformer (DGiT) framework to simultaneously capture genre and instance features for arbitrary style transfer. To the best of our knowledge, this is the first work to integrate genre features and instance features to generate a high-quality stylized image. Moreover, we design two contrastive losses to enhance the capability of the network to capture the two style features. Our approach ensures the uniform distribution of the overall style across the stylized image while enhancing the details of textures and strokes in local regions. Qualitative and quantitative evaluations demonstrate that our approach exhibits superior performance in terms of visual quality and efficiency.
Dual-head Genre-instance Transformer Network for Arbitrary Style Transfer
[ "Meichen Liu", "Shuting He", "Songnan Lin", "Bihan Wen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a6vvqnXLIU
@inproceedings{ li2024learning, title={Learning from Concealed Labels}, author={Zhongnian Li and Meng Wei and Peng Ying and Tongfeng Sun and Xinzheng Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=a6vvqnXLIU} }
Annotating data for sensitive labels (e.g., disease, smoking) poses a potential threat to individual privacy in many real-world scenarios. To cope with this problem, we propose a novel setting to protect the privacy of each instance, namely learning from concealed labels for multi-class classification. Concealed labels prevent sensitive labels from appearing in the label set during the label collection stage, as shown in Figure 1, which specifies none and some randomly sampled insensitive labels as the concealed label set used to annotate sensitive data. In this paper, an unbiased estimator can be established from concealed data under mild assumptions, and the learned multi-class classifier can not only accurately classify instances from insensitive labels but also recognize instances from sensitive labels. Moreover, we bound the estimation error and show that the multi-class classifier achieves the optimal parametric convergence rate. Experiments demonstrate the significance and effectiveness of the proposed method for concealed labels on synthetic and real-world datasets.
Learning from Concealed Labels
[ "Zhongnian Li", "Meng Wei", "Peng Ying", "Tongfeng Sun", "Xinzheng Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZyQWfb95zE
@inproceedings{ li2024efficient, title={Efficient Face Super-Resolution via Wavelet-based Feature Enhancement Network}, author={wenjie li and Heng Guo and Xuannan Liu and Kongming Liang and Jiani Hu and Zhanyu Ma and Jun Guo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZyQWfb95zE} }
Face super-resolution aims to reconstruct a high-resolution face image from a low-resolution face image. Previous methods typically employ an encoder-decoder structure to extract facial structural features, where the direct downsampling inevitably introduces distortions, especially to high-frequency features such as edges. To address this issue, we propose a wavelet-based feature enhancement network, which mitigates feature distortion by losslessly decomposing the input facial feature into high-frequency and low-frequency components using the wavelet transform and processing them separately. To improve the efficiency of facial feature extraction, a full domain Transformer is further proposed to enhance local, regional, and global low-frequency facial features. Such designs allow our method to perform better without stacking many network modules as previous methods did. Extensive experiments demonstrate that our method effectively balances performance, model size, and inference speed. All code and data will be released upon acceptance.
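For context only, the snippet below illustrates the kind of lossless wavelet split into low- and high-frequency sub-bands that the abstract describes; it is a generic single-level 2D Haar transform using PyWavelets, not the authors' network, and the random array stands in for a facial feature map.

```python
# Minimal sketch (not the paper's code): lossless split of a 2D signal into
# low-/high-frequency sub-bands with a single-level Haar wavelet transform.
import numpy as np
import pywt

img = np.random.rand(128, 128).astype(np.float64)  # stand-in for a face feature map

# cA is the low-frequency band; (cH, cV, cD) are horizontal, vertical, and
# diagonal high-frequency bands carrying edges and fine detail.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# The two kinds of bands could be processed by separate branches, as the
# abstract suggests; here we just invert the transform to confirm it is lossless.
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("max reconstruction error:", np.abs(recon - img).max())
```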
Efficient Face Super-Resolution via Wavelet-based Feature Enhancement Network
[ "wenjie li", "Heng Guo", "Xuannan Liu", "Kongming Liang", "Jiani Hu", "Zhanyu Ma", "Jun Guo" ]
Conference
poster
2407.19768
[ "https://github.com/pris-cv/wfen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZyKdUtuUjZ
@inproceedings{ yang2024consistentavatar, title={ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance}, author={Haijie Yang and Zhenyu Zhang and Hao Tang and Jianjun Qian and Jian Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZyKdUtuUjZ} }
Diffusion models have shown impressive potential for talking head generation. While plausible appearance and talking effects are achieved, these methods still suffer from temporal, 3D, or expression inconsistency due to error accumulation and the inherent limitations of single-image generation. In this paper, we propose ConsistentAvatar, a novel framework for fully consistent and high-fidelity talking avatar generation. Instead of directly applying multi-modal conditions to the diffusion process, our method first learns to model the temporal representation for stability between adjacent frames. Specifically, we propose a Temporally-Sensitive Detail (TSD) map containing high-frequency features and contours that vary significantly along the time axis. Using a temporally consistent diffusion module, we learn to align the TSD of the initial result with that of the ground-truth video frame. The final avatar is generated by a fully consistent diffusion module, conditioned on the aligned TSD, a rough head normal, and an emotion prompt embedding. We find that the aligned TSD, which represents the temporal patterns, constrains the diffusion process to generate temporally stable talking heads. Further, its reliable guidance complements the inaccuracy of other conditions, suppressing the accumulated error while improving consistency in various aspects. Extensive experiments demonstrate that ConsistentAvatar outperforms state-of-the-art methods in generated appearance, 3D, expression, and temporal consistency.
ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance
[ "Haijie Yang", "Zhenyu Zhang", "Hao Tang", "Jianjun Qian", "Jian Yang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Zqi6eZri8Z
@inproceedings{ zhou2024subjective, title={Subjective and Objective Quality-of-Experience Assessment for 3D Talking Heads}, author={Yingjie Zhou and Zicheng Zhang and Wei Sun and Xiaohong Liu and Xiongkuo Min and Guangtao Zhai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Zqi6eZri8Z} }
In recent years, immersive communication has emerged as a compelling alternative to traditional video communication methods. One prospective avenue for immersive communication involves augmenting the user's immersive experience through the transmission of three-dimensional (3D) talking heads (THs). However, transmitting 3D THs poses significant challenges due to their complex and voluminous nature, often leading to pronounced distortion and a compromised user experience. Addressing this challenge, we introduce the 3D Talking Heads Quality Assessment (THQA-3D) dataset, comprising 1,000 sets of distorted and 50 original TH mesh sequences (MSs), to facilitate quality assessment in 3D TH transmission. A subjective experiment, characterized by a novel interactive approach, is conducted with recruited participants to assess the quality of MSs in the THQA-3D dataset. Leveraging this dataset, we also propose a multimodal Quality-of-Experience (QoE) method incorporating a Large Quality Model (LQM). This method involves frontal projection of MSs and subsequent rendering into videos, with quality assessment facilitated by the LQM and a variable-length video memory filter (VVMF). Additionally, tone-lip coherence and silence detection techniques are employed to characterize audio-visual coherence in 3D MS streams. Experimental evaluation demonstrates the proposed method's superiority, achieving state-of-the-art performance on the THQA-3D dataset and competitiveness on other QoE datasets. Both the THQA-3D dataset and the QoE model have been publicly released at https://github.com/zyj-2000/THQA-3D.
Subjective and Objective Quality-of-Experience Assessment for 3D Talking Heads
[ "Yingjie Zhou", "Zicheng Zhang", "Wei Sun", "Xiaohong Liu", "Xiongkuo Min", "Guangtao Zhai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Zo4P2F7xLY
@inproceedings{ du2024multicolor, title={MultiColor: Image Colorization by Learning from Multiple Color Spaces}, author={Xiangcheng Du and Zhao Zhou and Xingjiao Wu and Yanlong Wang and Zhuoyao Wang and Yingbin Zheng and Cheng Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Zo4P2F7xLY} }
Deep networks have shown impressive performance in image restoration tasks, such as image colorization. However, we find that previous approaches rely on the digital representation of a single color model with a specific mapping function, a.k.a. a color space, during the colorization pipeline. In this paper, we first investigate the modeling of different color spaces and find that each exhibits distinctive characteristics with a unique distribution of colors. The complementarity among multiple color spaces benefits the image colorization task. We present MultiColor, a new learning-based approach to automatically colorize grayscale images that combines clues from multiple color spaces. Specifically, we employ a set of dedicated colorization modules, one for each color space. Within each module, a transformer decoder is first employed to refine color query embeddings, and then a color mapper produces color channel predictions using the embeddings and semantic features. With these predicted color channels representing various color spaces, a complementary network is designed to exploit the complementarity and generate pleasing and reasonable colorized images. We conduct extensive experiments on real-world datasets, and the results demonstrate superior performance over the state-of-the-art. The code will be available.
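To make the "multiple color spaces" idea concrete, the sketch below converts the same RGB image into Lab, YCbCr, and HSV with standard scikit-image routines; each space factors luminance and chrominance differently, which is the kind of complementarity the abstract refers to. This is a generic illustration under assumed inputs, not the MultiColor pipeline.

```python
# Generic sketch: one RGB image viewed in several color spaces.
# Illustrative only -- not the MultiColor model.
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)      # stand-in RGB image with values in [0, 1]

lab = color.rgb2lab(rgb)             # L: lightness, a/b: chrominance
ycbcr = color.rgb2ycbcr(rgb)         # Y: luma, Cb/Cr: chroma offsets
hsv = color.rgb2hsv(rgb)             # hue / saturation / value

# A per-space colorization module would predict only the color channels
# (e.g., a/b or Cb/Cr) from the grayscale input; a fusion network would then
# combine the complementary predictions into the final colorized image.
print(lab[..., 0].min(), lab[..., 0].max())   # L channel lies roughly in [0, 100]
```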
MultiColor: Image Colorization by Learning from Multiple Color Spaces
[ "Xiangcheng Du", "Zhao Zhou", "Xingjiao Wu", "Yanlong Wang", "Zhuoyao Wang", "Yingbin Zheng", "Cheng Jin" ]
Conference
poster
2408.04172
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZiqXfnfGtZ
@inproceedings{ jia2024discontrolface, title={DisControlFace: Adding Disentangled Control to Diffusion Autoencoder for One-shot Explicit Facial Image Editing}, author={Haozhe Jia and Yan Li and Hengfei Cui and Di Xu and Yuwang Wang and Tao Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZiqXfnfGtZ} }
In this work, we focus on exploring explicit fine-grained control of generative facial image editing while generating faithful facial appearances and consistent semantic details, which, however, is quite challenging and has not been extensively explored, especially under a one-shot scenario. We identify the key challenge as the exploration of disentangled conditional control between high-level semantics and explicit parameters (e.g., 3DMM) in the generation process, and accordingly propose a novel diffusion-based editing framework, named DisControlFace. Specifically, we leverage a Diffusion Autoencoder (Diff-AE) as the semantic reconstruction backbone. To enable explicit face editing, we construct an Exp-FaceNet that is compatible with Diff-AE to generate spatial-wise explicit control conditions based on estimated 3DMM parameters. Different from current diffusion-based editing methods that train the whole conditional generative model from scratch, we freeze the pre-trained weights of the Diff-AE to maintain its semantically deterministic conditioning capability and accordingly propose a random semantic masking (RSM) strategy to effectively achieve independent training of Exp-FaceNet. This setting endows the model with disentangled face control while reducing semantic information shift during editing. Our model can be trained using 2D in-the-wild portrait images without requiring 3D or video data, and performs robust editing on any new facial image through simple one-shot fine-tuning. Comprehensive experiments demonstrate that DisControlFace can generate realistic facial images with better editing accuracy and identity preservation than state-of-the-art methods.
DisControlFace: Adding Disentangled Control to Diffusion Autoencoder for One-shot Explicit Facial Image Editing
[ "Haozhe Jia", "Yan Li", "Hengfei Cui", "Di Xu", "Yuwang Wang", "Tao Yu" ]
Conference
poster
2312.06193
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZaJaXNMsqg
@inproceedings{ deng2024advancing, title={Advancing Semantic Edge Detection through Cross-Modal Knowledge Learning}, author={Ruoxi Deng and Bin Yu and Jinxuan Lu and Caixia Zhou and Zhao-Min Chen and Jie Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZaJaXNMsqg} }
Semantic edge detection (SED) is pivotal for the precise demarcation of object boundaries, yet it faces ongoing challenges due to the prevalence of low-quality labels in current methods. In this paper, we present a novel solution to bolster SED through the encoding of both language and image data. Distinct from antecedent language-driven techniques, which predominantly utilize static elements such as dataset labels, our method taps into the dynamic language content that details the objects in each image and their interrelations. By encoding this varied input, we generate integrated features that utilize semantic insights to refine the high-level image features and the ultimate mask representations. This advancement markedly betters the quality of these features and elevates SED performance. Experimental evaluation on benchmark datasets, including SBD and Cityscape, showcases the efficacy of our method, achieving leading ODS F-scores of 79.0 and 76.0, respectively. Our approach signifies a notable advancement in SED technology by seamlessly integrating multimodal textual information, embracing both static and dynamic aspects.
Advancing Semantic Edge Detection through Cross-Modal Knowledge Learning
[ "Ruoxi Deng", "Bin Yu", "Jinxuan Lu", "Caixia Zhou", "Zhao-Min Chen", "Jie Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZXTY6h5f8d
@inproceedings{ zhang2024treereward, title={TreeReward: Improve Diffusion Model via Tree-Structured Feedback Learning}, author={Jiacheng Zhang and Jie Wu and Huafeng Kuang and Haiming Zhang and Yuxi Ren and Weifeng Chen and Manlin Zhang and Xuefeng Xiao and Rui Wang and Shilei Wen and Guanbin Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZXTY6h5f8d} }
Recently, there has been significant progress in leveraging human feedback to enhance image generation, leading to the emergence of a rapidly evolving research area. However, current work faces several critical challenges: i) insufficient data quantity; and ii) coarse feedback learning. To tackle these challenges, we present TreeReward, a novel multi-dimensional, fine-grained, and adaptive feedback learning framework that aims to improve both the semantic and aesthetic aspects of diffusion models. Specifically, to address the limited availability of fine-grained feedback data, we first design an efficient feedback data construction pipeline in an "AI + Expert" fashion, yielding a high-quality feedback dataset of about 2.2M samples encompassing six fine-grained dimensions. Built upon this, we introduce a tree-structured reward model to exploit the fine-grained feedback data efficiently and provide tailored optimization during feedback learning. Extensive experiments on both Stable Diffusion v1.5 (SD1.5) and Stable Diffusion XL (SDXL) demonstrate the effectiveness of our method in enhancing general and fine-grained generation performance, as well as its generalizability to downstream tasks.
TreeReward: Improve Diffusion Model via Tree-Structured Feedback Learning
[ "Jiacheng Zhang", "Jie Wu", "Huafeng Kuang", "Haiming Zhang", "Yuxi Ren", "Weifeng Chen", "Manlin Zhang", "Xuefeng Xiao", "Rui Wang", "Shilei Wen", "Guanbin Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZXHl9tUbm1
@inproceedings{ jiang2024a, title={A General Framework to Boost 3D {GS} Initialization for Text-to-3D Generation by Lexical Richness}, author={Lutao Jiang and Hangyu Li and Lin Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZXHl9tUbm1} }
Text-to-3D content creation has recently received much attention, especially with the prevalence of 3D Gaussian Splatting (3D GS). In general, GS-based methods comprise two key stages: initialization and rendering optimization. To achieve initialization, existing works directly apply random sphere initialization or 3D diffusion models, e.g., Point-E, to derive the initial shapes. However, such strategies suffer from two critical yet challenging problems: 1) the final shapes are still similar to the initial ones even after training; 2) shapes can be produced only from simple texts, e.g., '*a dog*', but not from lexically richer (or harder) texts, e.g., `*a dog is sitting on the top of the airplane*'. To address these problems, this paper proposes a novel general framework to boost the 3D GS initialization for text-to-3D generation under lexical richness. Our key idea is to aggregate 3D Gaussians into spatially uniform voxels to represent complex shapes while enabling spatial interaction among the 3D Gaussians and semantic interaction between Gaussians and texts. Specifically, we first construct a voxelized representation, where each voxel holds a 3D Gaussian with its position, scale, and rotation fixed while setting opacity as the sole factor that determines a position's occupancy. We then design an initialization network mainly consisting of two novel components: 1) a Global Information Perception (GIP) block and 2) a Gaussians-Text Fusion (GTF) block. Such a design enables each 3D Gaussian to assimilate spatial information from other areas and semantic information from texts. Extensive experiments show the superiority of our framework for high-quality 3D GS initialization over existing methods, e.g., Shap-E, on lexically *simple*, *medium*, and *hard* texts. Also, our framework can be seamlessly plugged into state-of-the-art training frameworks, e.g., LucidDreamer, for semantically consistent text-to-3D generation. Our code will be released upon acceptance.
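A minimal sketch of the voxelized representation described above, under assumed grid resolution and value ranges (not the authors' implementation): Gaussian centers sit on a uniform grid, scale and rotation are held fixed, and a per-voxel opacity is the only learnable occupancy variable.

```python
# Sketch of a voxelized 3D Gaussian initialization: positions, scales, and
# rotations fixed on a uniform grid; opacity is the sole learnable quantity.
# Grid resolution and coordinate range are illustrative assumptions.
import torch

R = 32                                              # voxels per axis (assumed)
axis = torch.linspace(-1.0, 1.0, R)
zz, yy, xx = torch.meshgrid(axis, axis, axis, indexing="ij")
centers = torch.stack([xx, yy, zz], dim=-1).reshape(-1, 3)       # (R^3, 3), fixed

scales = torch.full((centers.shape[0], 3), 2.0 / R)              # fixed isotropic scale
rotations = torch.zeros(centers.shape[0], 4)
rotations[:, 0] = 1.0                                            # identity quaternions, fixed

# Opacity logits are the only trainable parameters of the initialization stage;
# a sigmoid maps them to an occupancy value in (0, 1) per voxel.
opacity_logits = torch.nn.Parameter(torch.zeros(centers.shape[0], 1))
occupancy = torch.sigmoid(opacity_logits)
print(centers.shape, occupancy.shape)
```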
A General Framework to Boost 3D GS Initialization for Text-to-3D Generation by Lexical Richness
[ "Lutao Jiang", "Hangyu Li", "Lin Wang" ]
Conference
poster
2408.01269
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZRpxEuTYYi
@inproceedings{ shen2024studentoriented, title={Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation}, author={Chaomin Shen and Yaomin Huang and HaoKun Zhu and Jinsong Fan and Guixu Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZRpxEuTYYi} }
Knowledge distillation has become widely recognized for its ability to transfer knowledge from a large teacher network to a compact and more streamlined student network. Traditional knowledge distillation methods primarily follow a teacher-oriented paradigm that imposes the task of learning the teacher's complex knowledge onto the student network. However, significant disparities in model capacity and architectural design hinder the student's comprehension of the complex knowledge imparted by the teacher, resulting in sub-optimal learning results. This paper introduces a novel approach that emphasizes a student-oriented perspective and refines the teacher's knowledge to better align with the student's needs, thereby improving knowledge transfer effectiveness. Specifically, we present Student-Oriented Knowledge Distillation (SoKD), which incorporates a learnable feature augmentation strategy during training to dynamically refine the teacher's knowledge for the student. Furthermore, we deploy the Distinctive Area Detection Module (DAM) to identify areas of mutual interest between the teacher and student, concentrating knowledge transfer within these critical areas to avoid spreading irrelevant information. This targeted approach ensures a more focused and effective knowledge distillation process. Our approach, functioning as a plug-in, can be integrated with various knowledge distillation methods. Extensive experimental results demonstrate the efficacy and generalizability of our method.
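For background only: the sketch below is the standard temperature-scaled logit distillation objective that plug-in methods such as SoKD build on; the paper's student-oriented feature augmentation and Distinctive Area Detection Module are not shown here.

```python
# Baseline knowledge-distillation loss (Hinton-style), shown as the generic
# objective that plug-in refinements would extend -- not SoKD itself.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy with ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y).item())
```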
Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation
[ "Chaomin Shen", "Yaomin Huang", "HaoKun Zhu", "Jinsong Fan", "Guixu Zhang" ]
Conference
poster
2409.18785
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZR3JAjWAom
@inproceedings{ wei2024understanding, title={Understanding the Impact of {AI}-Generated Content on Social Media: The Pixiv Case}, author={Yiluo Wei and Gareth Tyson}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZR3JAjWAom} }
In the last two years, Artificial Intelligence Generated Content (AIGC) has received significant attention, leading to an anecdotal rise in the amount of AIGC being shared via social media platforms. The impact of AIGC and its implications are of key importance to social platforms, e.g., regarding the implementation of policies, community formation, and algorithmic design. Yet, to date, we know little about how the arrival of AIGC has impacted the social media ecosystem. To fill this gap, we present a comprehensive study of Pixiv, an online community for artists who wish to share and receive feedback on their illustrations. Pixiv hosts over 100 million artistic submissions and receives more than 1 billion page views per month (as of 2023). Importantly, it allows both human and AI generated content to be uploaded. Exploiting this, we perform the first analysis of the impact that AIGC has had on the social media ecosystem, through the lens of Pixiv. Based on a dataset of 15.2 million posts (including 2.4 million AI-generated images), we measure the impact of AIGC on the Pixiv community, as well as the differences between AIGC and human-generated content in terms of content creation and consumption patterns. Our results offer key insight to how AIGC is changing the dynamics of social media platforms like Pixiv.
Understanding the Impact of AI-Generated Content on Social Media: The Pixiv Case
[ "Yiluo Wei", "Gareth Tyson" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZQMXJwN3VL
@inproceedings{ zhou2024hydrodynamicsinformed, title={Hydrodynamics-Informed Neural Network for Simulating Dense Crowd Motion Patterns}, author={Yanshan Zhou and Pingrui Lai and Jiaqi Yu and Yingjie Xiong and Hua Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ZQMXJwN3VL} }
With global occurrences of crowd crushes and stampedes, dense crowd simulation has been drawing great attention. In this research, our goal is to simulate dense crowd motions under six classic motion patterns, more specifically, to generate subsequent motions of dense crowds from given initial states. Since dense crowds share similarities with fluids, such as continuity and fluidity, one common approach for dense crowd simulation is to construct hydrodynamics-based models, which consider dense crowds as fluids, guide crowd motions with Navier-Stokes equations, and conduct dense crowd simulation by solving the governing equations. Despite the proposal of such models, dense crowd simulation faces multiple challenges, including the difficulty of directly solving Navier-Stokes equations due to their nonlinear nature, the neglect of distinctive crowd characteristics that fluids lack, and the gaps in the evaluation and validation of crowd simulation models. To address these challenges, we build a hydrodynamic model, which captures the crowd physical properties (continuity, fluidity, etc.) with Navier-Stokes equations and reflects the crowd social properties (sociality, personality, etc.) with operators that describe crowd interactions and crowd-environment interactions. To tackle the computational problem, we propose to solve the governing equation based on Navier-Stokes equations using neural networks, and introduce the Hydrodynamics-Informed Neural Network (HINN), which preserves the structure of the governing equation in its network architecture. To facilitate evaluation, we construct a new dense crowd motion video dataset called the Dense Crowd Flow Dataset (DCFD), containing six classic motion patterns (line, curve, circle, cross, cluster, and scatter) and 457 video clips, which can serve as ground truths for various objective metrics. Numerous experiments are conducted using HINN to simulate dense crowd motions under the six motion patterns with video clips from DCFD. Objective evaluation metrics that concern authenticity, fidelity, and diversity demonstrate the superior performance of our model in dense crowd simulation compared to other simulation models.
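As a rough illustration of the physics-informed idea (a 1D continuity equation rather than the paper's full Navier-Stokes-based governing equation, with assumed network shapes), automatic differentiation turns the PDE residual into a training loss that a network like HINN could minimize alongside data terms.

```python
# Simplified physics-informed loss: penalize the residual of a 1D continuity
# equation d(rho)/dt + d(rho*u)/dx = 0 for a small network's predictions.
# Illustrative stand-in only, not HINN's actual formulation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))

def pde_residual_loss(xt):
    xt = xt.clone().requires_grad_(True)           # (N, 2): columns are x and t
    rho, u = net(xt).split(1, dim=1)               # predicted density and velocity
    flux = rho * u
    drho = torch.autograd.grad(rho.sum(), xt, create_graph=True)[0]
    dflux = torch.autograd.grad(flux.sum(), xt, create_graph=True)[0]
    residual = drho[:, 1:2] + dflux[:, 0:1]        # d(rho)/dt + d(rho*u)/dx
    return (residual ** 2).mean()

# Toy usage: collocation points sampled in the space-time domain.
pts = torch.rand(256, 2)
loss = pde_residual_loss(pts)                      # would be combined with data terms
loss.backward()
print(loss.item())
```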
Hydrodynamics-Informed Neural Network for Simulating Dense Crowd Motion Patterns
[ "Yanshan Zhou", "Pingrui Lai", "Jiaqi Yu", "Yingjie Xiong", "Hua Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0