Dataset schema (column · dtype · observed range):
bibtex_url · null
proceedings · string · length 42–42
bibtext · string · length 215–445
abstract · string · length 820–2.37k
title · string · length 24–147
authors · sequence · length 1–13
id · string · 1 class
type · string · 2 classes
arxiv_id · string · length 0–10
GitHub · sequence · length 1–1
paper_page · string · 33 classes
n_linked_authors · int64 · -1 to 4
upvotes · int64 · -1 to 21
num_comments · int64 · -1 to 4
n_authors · int64 · -1 to 11
Models · sequence · length 0–1
Datasets · sequence · length 0–1
Spaces · sequence · length 0–4
old_Models · sequence · length 0–1
old_Datasets · sequence · length 0–1
old_Spaces · sequence · length 0–4
paper_page_exists_pre_conf · int64 · 0 to 1
null
https://openreview.net/forum?id=IcuqIcD5Lt
@inproceedings{ lin2024hidemia, title={Hide{MIA}: Hidden Wavelet Mining for Privacy-Enhancing Medical Image Analysis}, author={Xun Lin and Yi Yu and Zitong YU and Ruohan Meng and Jiale Zhou and Ajian Liu and Yizhong Liu and Shuai Wang and Wenzhong Tang and Zhen Lei and Alex Kot}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IcuqIcD5Lt} }
Despite the advancements that deep learning has brought to medical image analysis (MIA), protecting the privacy of images remains a challenge. In a client-server MIA framework, especially after deployment, patients' private medical images can be easily captured by attackers from the transmission channel or malicious third-party servers. Previous MIA privacy-enhancing methods, whether based on distortion or homomorphic encryption, either expose the fact that the transmitted images are medical images or transform the images into semantics-lacking noise. This tends to alert attackers, leading to a cat-and-mouse game of theft and protection. To address this issue, we propose a covert MIA framework based on deep image hiding, namely HideMIA, which secures medical images by embedding them within natural cover images that are unlikely to raise suspicion. By directly analyzing the hidden medical images in the steganographic domain, HideMIA makes it difficult for attackers to notice the presence of medical images. Specifically, we propose the Mixture-of-Difference-Convolutions (MoDC) and Asymmetric Wavelet Attention (AsyWA) to enable HideMIA to conduct fine-grained analysis on each wavelet sub-band within the steganographic domain, mining features that are specific to medical images. Moreover, to reduce resource consumption on client devices, we design function-aligned knowledge distillation to obtain a lightweight hiding network, namely LightIH. Extensive experiments on six medical datasets demonstrate that our HideMIA achieves superior MIA performance and protective imperceptibility on medical image segmentation and classification.
HideMIA: Hidden Wavelet Mining for Privacy-Enhancing Medical Image Analysis
[ "Xun Lin", "Yi Yu", "Zitong YU", "Ruohan Meng", "Jiale Zhou", "Ajian Liu", "Yizhong Liu", "Shuai Wang", "Wenzhong Tang", "Zhen Lei", "Alex Kot" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ibyok7prlJ
@inproceedings{ qiao2024capnet, title={{CAPN}et: Cartoon Animal Parsing with Spatial Learning and Structural Modeling}, author={Jian-Jun Qiao and Meng-Yu Duan and Xiao Wu and Wei Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ibyok7prlJ} }
Cartoon animal parsing aims to segment the body parts such as heads, arms, legs and tails from cartoon animals. Different from previous parsing tasks, cartoon animal parsing faces new challenges, including irregular body structures, abstract drawing styles and diverse animal categories. Existing methods have difficulties when addressing these challenges caused by the spatial and structural properties of cartoon animals. To address these challenges, a novel spatial learning and structural modeling network, named CAPNet, is proposed for cartoon animal parsing. It aims to address the critical problems of spatial perception, structure modeling and spatial-structural consistency learning. A spatial-aware learning module integrates deformable convolutions to learn spatial features of diverse cartoon animals. The multi-task edge and center point prediction mechanism is incorporated to capture the intricate spatial patterns. A structural modeling method is proposed to model the complex structural representations of cartoon animals, which integrates a graph neural network with a shape-aware relation learning module. To mitigate the significant differences among animals, a spatial and structural consistency learning strategy is proposed to capture and learn feature correlations across different animal species. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art methods.
CAPNet: Cartoon Animal Parsing with Spatial Learning and Structural Modeling
[ "Jian-Jun Qiao", "Meng-Yu Duan", "Xiao Wu", "Wei Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IXM8eL4KvE
@inproceedings{ shuyuan2024exploring, title={Exploring the Robustness of Decision-Level Through Adversarial Attacks on {LLM}-Based Embodied Models}, author={Liu Shuyuan and Jiawei Chen and Shouwei Ruan and Hang Su and ZHAOXIA YIN}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IXM8eL4KvE} }
Embodied intelligence empowers agents with a profound sense of perception, enabling them to respond in a manner closely aligned with real-world situations. Large Language Models (LLMs) delve into language instructions with depth, serving a crucial role in generating plans for intricate tasks. Thus, LLM-based embodied models further enhance the agent's capacity to comprehend and process information. However, this amalgamation also ushers in new challenges in the pursuit of heightened intelligence. Specifically, attackers can manipulate LLMs to produce irrelevant or even malicious outputs by altering their prompts. Confronted with this challenge, we observe a notable absence of multi-modal datasets essential for comprehensively evaluating the robustness of LLM-based embodied models. Consequently, we construct the Embodied Intelligent Robot Attack Dataset (EIRAD), tailored specifically for robustness evaluation. Additionally, two attack strategies are devised, including untargeted attacks and targeted attacks, to effectively simulate a range of diverse attack scenarios. At the same time, during the attack process, to more accurately ascertain whether our method is successful in attacking the LLM-based embodied model, we devise a new attack success evaluation method utilizing the BLIP2 model. Recognizing the time and cost-intensive nature of the GCG algorithm in attacks, we devise a scheme for prompt suffix initialization based on various target tasks, thus expediting the convergence process. Experimental results demonstrate that our method exhibits a superior attack success rate when targeting LLM-based embodied models, indicating a lower level of decision-level robustness in these models.
Exploring the Robustness of Decision-Level Through Adversarial Attacks on LLM-Based Embodied Models
[ "Liu Shuyuan", "Jiawei Chen", "Shouwei Ruan", "Hang Su", "ZHAOXIA YIN" ]
Conference
poster
2405.19802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IRiDIMJ8Zi
@inproceedings{ zhang2024vamark, title={V2A-Mark: Versatile Deep Visual-Audio Watermarking for Manipulation Localization and Copyright Protection}, author={Xuanyu Zhang and Youmin Xu and Runyi Li and Jiwen Yu and Weiqi Li and Zhipei Xu and Jian Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IRiDIMJ8Zi} }
AI-generated video has revolutionized short video production, filmmaking, and personalized media, making video local editing an essential tool. However, this progress also blurs the line between reality and fiction, posing challenges in multimedia forensics. To solve this urgent issue, V2A-Mark is proposed to address the limitations of current video tampering forensics, such as poor generalizability, singular function, and single modality focus. Combining the fragility of video-into-video steganography with deep robust watermarking, our method can embed invisible visual-audio localization watermarks and copyright watermarks into the original video frames and audio, enabling precise manipulation localization and copyright protection. We also design a temporal alignment and fusion module and degradation prompt learning to enhance the localization accuracy and decoding robustness. Meanwhile, we introduce a sample-level audio localization method and a cross-modal copyright extraction mechanism to couple the information of audio and video frames. The effectiveness of V2A-Mark has been verified on a visual-audio tampering dataset, emphasizing its superiority in localization precision and copyright accuracy, crucial for the sustainable development of video editing in the AIGC video era.
V2A-Mark: Versatile Deep Visual-Audio Watermarking for Manipulation Localization and Copyright Protection
[ "Xuanyu Zhang", "Youmin Xu", "Runyi Li", "Jiwen Yu", "Weiqi Li", "Zhipei Xu", "Jian Zhang" ]
Conference
poster
2404.16824
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IGICCraedH
@inproceedings{ tian2024dynamic, title={Dynamic Mixed-Prototype Model for Incremental Deepfake Detection}, author={Jiahe Tian and Cai Yu and Peng Chen and Zihao Xiao and Xi Wang and Jizhong Han and Yesheng Chai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=IGICCraedH} }
The rapid advancement of deepfake technology poses significant threats to social trust. Although recent deepfake detectors have exhibited promising results on deepfakes of the same type as those present in training, their effectiveness degrades significantly on novel deepfakes crafted by unseen algorithms due to the gap in forgery patterns. Some studies have enhanced detectors by adapting to the continuously emerging deepfakes through incremental learning. Despite the progress, they overlooked the scarcity of novel samples, which can easily lead to insufficient learning of forgery patterns. To mitigate this issue, we introduce the Dynamic Mixed-Prototype (DMP) model, which dynamically increases prototypes to adapt to novel deepfakes efficiently. Specifically, the DMP model adopts multiple prototypes to represent both real and fake classes, enabling learning novel patterns by expanding prototypes and jointly retaining knowledge learned in previous prototypes. Furthermore, we propose the Prototype-Guided Replay strategy and Prototype Representation Distillation loss, both of which effectively prevent forgetting learned knowledge based on the prototypical representation of samples. Our method surpasses existing incremental deepfake detectors across four datasets and exhibits superior generalizability to novel deepfakes through learning limited deepfake samples.
Dynamic Mixed-Prototype Model for Incremental Deepfake Detection
[ "Jiahe Tian", "Cai Yu", "Peng Chen", "Zihao Xiao", "Xi Wang", "Jizhong Han", "Yesheng Chai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=I9y11jzmNz
@inproceedings{ liu2024label, title={Label Text-aided Hierarchical Semantics Mining for Panoramic Activity Recognition}, author={Tianshan Liu and Kin-man Lam and Bingkun BAO}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=I9y11jzmNz} }
Panoramic activity recognition is a comprehensive yet challenging task in crowd scene understanding, which aims to concurrently identify multi-grained human behaviors, including individual actions, social group activities, and global activities. Previous studies tend to capture cross-granularity activity-semantics relations from solely the video input, thus ignoring the intrinsic semantic hierarchy in label-text space. To this end, we propose a label text-aided hierarchical semantics mining (THSM) framework, which explores multi-level cross-modal associations by learning hierarchical semantic alignment between visual content and label texts. Specifically, a hierarchical encoder is first constructed to encode the visual and text inputs into semantics-aligned representations at different granularities. To fully exploit the cross-modal semantic correspondence learned by the encoder, a hierarchical decoder is further developed, which progressively integrates the lower-level representations with the higher-level contextual knowledge for coarse-to-fine action/activity recognition. Extensive experimental results on the public JRDB-PAR benchmark validate the superiority of the proposed THSM framework over state-of-the-art methods.
Label Text-aided Hierarchical Semantics Mining for Panoramic Activity Recognition
[ "Tianshan Liu", "Kin-man Lam", "Bingkun BAO" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=I8s3kmD82D
@inproceedings{ dong2024unidense, title={UniDense: Unleashing Diffusion Models with Meta-Routers for Universal Few-Shot Dense Prediction}, author={Lintao Dong and Wei Zhai and Zheng-Jun Zha}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=I8s3kmD82D} }
Universal few-shot dense prediction requires a versatile model capable of learning any dense prediction task from limited labeled images, which requires the model to possess efficient adaptation abilities. Prevailing few-shot learning methods rely on efficient fine-tuning of model weights for few-shot adaptation, which carries the risk of disrupting the pre-trained knowledge and lacks the capability to extract task-specific knowledge contained in the pre-trained model. To overcome these limitations, our paper approaches universal few-shot dense prediction from a novel perspective. Unlike conventional fine-tuning techniques that directly use all parameters of the model and modify a specific set of weights for few-shot adaptation, our method focuses on selecting the task-relevant computation pathways of the pre-trained model while keeping the model weights frozen. Building upon this idea, we introduce a novel framework, UniDense, for universal few-shot dense prediction. First, we construct a versatile MoE architecture for dense prediction based on the Stable Diffusion model. We then utilize episode-based meta-learning to train a set of routers for this MoE model, called Meta-Routers, which act as hyper-networks responsible for selecting computation blocks relevant to each task. We demonstrate that fine-tuning these meta-routers for novel tasks enables efficient adaptation of the entire model. Moreover, for each few-shot task, we leverage support samples to extract a task embedding, which serves as a conditioning factor for meta-routers. This strategy allows meta-routers to dynamically adapt themselves for different few-shot tasks, leading to improved adaptation performance. Experiments on a challenging variant of the Taskonomy dataset with 10 dense prediction tasks demonstrate the superiority of our approach.
UniDense: Unleashing Diffusion Models with Meta-Routers for Universal Few-Shot Dense Prediction
[ "Lintao Dong", "Wei Zhai", "Zheng-Jun Zha" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=I4EJdRTVaf
@inproceedings{ haas2024towards, title={Towards Trustworthy MetaShopping: Studying Manipulative Audiovisual Designs in Virtual-Physical Commercial Platforms}, author={Esmee Henrieke Anne de Haas and LIK-HANG LEE and Yiming Huang and Carlos BERMEJO FERNANDEZ and Pan Hui and Zijun Lin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=I4EJdRTVaf} }
E-commerce has emerged as a significant endeavour in which technological advancements influence the shopping experience. Simultaneously, the metaverse is the next breakthrough to transform multimedia engagement. However, in such settings, deceptive designs aimed at steering users into making desired choices might be more successful. This paper proposes the design space of manipulative techniques in e-commerce applications for the metaverse. We construct our arguments by evaluating user interaction with manipulative design in metaverse shopping experiences, followed by a survey among users to understand the effect of counteracting manipulative e-commerce scenarios. Our findings can reinforce the understanding of design guidelines for metaverse e-commerce experiences and highlight opportunities to improve user awareness of manipulative experiences.
Towards Trustworthy MetaShopping: Studying Manipulative Audiovisual Designs in Virtual-Physical Commercial Platforms
[ "Esmee Henrieke Anne de Haas", "LIK-HANG LEE", "Yiming Huang", "Carlos BERMEJO FERNANDEZ", "Pan Hui", "Zijun Lin" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Hv25R70JwR
@inproceedings{ liu2024private, title={Private Gradient Estimation is Useful for Generative Modeling}, author={Bochao Liu and Pengju Wang and Weijia Guo and Yong Li and Liansheng Zhuang and Weiping Wang and Shiming Ge}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Hv25R70JwR} }
While generative models have proved successful in many domains, they may pose a privacy leakage risk in practical deployment. To address this issue, differentially private generative model learning has emerged as a solution to train private generative models for different downstream tasks. However, existing private generative modeling approaches face significant challenges in generating high-dimensional data due to the inherent complexity involved in modeling such data. In this work, we present a new private generative modeling approach where samples are generated via Hamiltonian dynamics with gradients of the private dataset estimated by a well-trained network. In this approach, we achieve differential privacy by perturbing the projection vectors in the estimation of gradients with sliced score matching. In addition, we enhance the reconstruction ability of the model by incorporating a residual enhancement module during the score matching. For sampling, we perform Hamiltonian dynamics with gradients estimated by the well-trained network, allowing the sampled data to approach the private dataset's manifold step by step. In this way, our model is able to generate data with a resolution of 256$\times$256. Extensive experiments and analysis clearly demonstrate the effectiveness and rationality of the proposed approach.
Private Gradient Estimation is Useful for Generative Modeling
[ "Bochao Liu", "Pengju Wang", "Weijia Guo", "Yong Li", "Liansheng Zhuang", "Weiping Wang", "Shiming Ge" ]
Conference
oral
2305.10662
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HtdBUEbl4X
@inproceedings{ lin2024semnft, title={Sem{NFT}: A Semantically Enhanced Decentralized Middleware for Digital Asset Immortality}, author={Lehao Lin and Hong KANG and Xinyao Sun and Wei Cai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HtdBUEbl4X} }
Non-Fungible Tokens (NFTs) have emerged as a pivotal digital asset, offering authenticated ownership of unique digital content. Despite having gained remarkable traction, NFTs face pressing storage and verification challenges stemming from blockchain's permanent data costs. Existing off-chain or centralized storage solutions, while being alternatives, also introduce notable security vulnerabilities. We present SemNFT, an innovative decentralized framework integrated with blockchain oracle middleware services, addressing these persistent NFT dilemmas. Our approach compresses NFT source data into compact embeddings encapsulating semantic essence. These arrays are stored on-chain, while facilitating reliable decentralized image reconstruction and ownership verification. We implemented ERC721-compliant smart contracts with supplementary functionalities, demonstrating SemNFT’s seamless integrative capabilities within the ecosystem. Extensive evaluations evidence marked storage optimizations and preservation of requisite visual fidelity by comparison with existing solutions. The proposed SemNFT framework marks a significant advancement in holistically confronting rising NFT storage and verification challenges without compromising decentralization. It substantively propels the meaningful evolution of NFT infrastructure to achieve digital asset immortality.
SemNFT: A Semantically Enhanced Decentralized Middleware for Digital Asset Immortality
[ "Lehao Lin", "Hong KANG", "Xinyao Sun", "Wei Cai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Hs2lECzJi9
@inproceedings{ zhong2024towards, title={Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks}, author={Xian Zhong and Shengwang Hu and Wenxuan Liu and Wenxin Huang and Jianhao Ding and Zhaofei Yu and Tiejun Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Hs2lECzJi9} }
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological interpretability. Their rich spatio-temporal information processing capability and event-driven nature make them well-suited for neuromorphic datasets. However, current SNNs struggle to balance accuracy and latency in classifying these datasets. In this paper, we propose a Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets, to mitigate the notable decline in performance at lower time steps. Our work disentangles the dependency between the number of event frames and the time steps of SNNs, utilizing more event frames during the training stage to improve performance, while using fewer event frames during the inference stage to reduce latency. Nevertheless, the average output of SNNs across all time steps is susceptible to individual time steps with abnormal outputs, particularly at extremely low time steps. To tackle this issue, we implement a Step-wise Knowledge Distillation (SKD) module that considers variations in the output distribution of SNNs at each time step. Empirical evidence demonstrates that our method yields competitive performance in classification tasks on neuromorphic datasets, especially at lower time steps.
Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks
[ "Xian Zhong", "Shengwang Hu", "Wenxuan Liu", "Wenxin Huang", "Jianhao Ding", "Zhaofei Yu", "Tiejun Huang" ]
Conference
poster
2409.12507
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Hru0C1VbfM
@inproceedings{ shi2024hiner, title={{HINER}: Neural Representation for Hyperspectral Image}, author={Junqi Shi and Mingyi Jiang and Ming Lu and Tong Chen and Xun Cao and Zhan Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Hru0C1VbfM} }
As hyperspectral images (HSI) are a prevalent scientific data format with extensive applications, their efficient compression and the quality of downstream tasks have garnered significant attention. This paper introduces HINER, a novel approach for compressing HSI using Neural Representation. HINER fully exploits inter-spectral correlations by explicitly encoding spectral wavelengths and achieves a compact representation of the input HSI sample through joint optimization with a learnable decoder. By additionally incorporating the Content Angle Mapper with the L1 loss, we can supervise the global and local information within each spectral band, thereby enhancing the overall reconstruction quality. For downstream classification on compressed HSI, we theoretically demonstrate that the task accuracy is not only related to the classification loss but also to the reconstruction fidelity through a first-order expansion of the accuracy degradation, and accordingly adapt the reconstruction by introducing Adaptive Spectral Weighting. Owing to the inherent capability of HINER to implicitly reconstruct spectral bands using input wavelengths, it can generate arbitrary continuous spectra, even those absent in the original input. Consequently, we propose utilizing Implicit Spectral Interpolation for data augmentation during classification model training, thereby improving overall task accuracy on compressed data. Experimental results on various HSI datasets demonstrate the superior compression performance of our HINER compared to existing learned methods and traditional codecs. Our model is lightweight and computationally efficient, and it maintains high accuracy for the downstream classification task even on decoded HSIs at high compression ratios.
HINER: Neural Representation for Hyperspectral Image
[ "Junqi Shi", "Mingyi Jiang", "Ming Lu", "Tong Chen", "Xun Cao", "Zhan Ma" ]
Conference
poster
2407.21395
[ "https://github.com/eric-qi/hiner" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Hr5cO79Ajw
@inproceedings{ yang2024synctalklip, title={SyncTalklip: Highly Synchronized Lip-Readable Speaker Generation with Multi-Task Learning}, author={Xiaoda Yang and Xize Cheng and Dongjie Fu and Minghui Fang and Jialung Zuo and Shengpeng Ji and Tao Jin and Zhou Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Hr5cO79Ajw} }
Talking Face Generation (TFG) reconstructs facial motions concerning lips given speech input, which aims to generate high-quality, synchronized, and lip-readable videos. Previous efforts have achieved success in quality and synchronization, and recently, there has been an increasing focus on the importance of intelligibility. Despite these efforts, there remains a challenge in achieving a balance among quality, synchronization, and intelligibility, often resulting in trade-offs that compromise one aspect in favor of another. In light of this, we propose SyncTalklip, a novel dual-tower framework designed to overcome the challenges of synchronization while improving lip-reading performance. To enhance the performance of SyncTalklip in both synchronization and intelligibility, we design AV-SyncNet, a pre-trained multi-task model, aiming to achieve a dual-focus on synchronization and intelligibility. Moreover, we propose a novel cross-modal contrastive learning approach that brings audio and video closer to enhance synchronization. Experimental results demonstrate that SyncTalklip achieves state-of-the-art performance in quality, intelligibility, and synchronization. Furthermore, extensive experiments have demonstrated our model's generalizability across domains. The code and demo are available at \url{https://sync-talklip.github.io}.
SyncTalklip: Highly Synchronized Lip-Readable Speaker Generation with Multi-Task Learning
[ "Xiaoda Yang", "Xize Cheng", "Dongjie Fu", "Minghui Fang", "Jialung Zuo", "Shengpeng Ji", "Tao Jin", "Zhou Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HpgRXzSVMR
@inproceedings{ guo2024a, title={A Progressive Skip Reasoning Fusion Method for Multi-Modal Classification}, author={Qian Guo and Xinyan Liang and Yuhua Qian and Zhihua Cui and Jie Wen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HpgRXzSVMR} }
In multi-modal classification tasks, a good fusion algorithm can effectively integrate and process multi-modal data, thereby significantly improving performance. Researchers often focus on the design of complex fusion operators and have proposed numerous fusion operators, while paying less attention to the design of feature fusion usage, specifically how features should be fused to better facilitate multi-modal classification tasks. In this article, we propose a progressive skip reasoning fusion network (PSRFN) to address this issue. Firstly, unlike most existing multi-modal fusion methods that only use one fusion operator in a single stage to fuse all view features, PSRFN utilizes the progressive skip reasoning (PSR) block to fuse all views with a fusion operator at each layer. Specifically, each PSR block utilizes all view features and the fused features from the previous layer to jointly obtain the fused features for the current layer. Secondly, each PSR block utilizes a dual-weighted fusion strategy with learnable parameters to adaptively allocate weights during the fusion process. The first level of weighting assigns weights to each view feature, while the second level assigns weights to the fused features from the previous layer and the fused features obtained from the first level of weighting in the current layer. This strategy ensures that the PSR block can dynamically adjust the weights based on the actual contribution of features. Finally, to enable the model to fully utilize feature information from different levels for feature fusion, skip connections are adopted between PSR blocks. Extensive experimental results on six real multi-modal datasets show that a better usage of fusion operators can indeed improve performance.
A Progressive Skip Reasoning Fusion Method for Multi-Modal Classification
[ "Qian Guo", "Xinyan Liang", "Yuhua Qian", "Zhihua Cui", "Jie Wen" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HbIxwfMJ4W
@inproceedings{ yi2024learning, title={Learning Spectral-decomposited Tokens for Domain Generalized Semantic Segmentation}, author={Jingjun Yi and Qi Bi and Hao Zheng and Haolan Zhan and Wei Ji and Yawen Huang and Yuexiang Li and Yefeng Zheng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HbIxwfMJ4W} }
The rapid development of Vision Foundation Models (VFMs) brings superior out-of-domain generalization for a variety of downstream tasks. Among them, domain generalized semantic segmentation (DGSS) holds unique challenges as the cross-domain images share common pixel-wise content information (i.e., semantics) but vary greatly in style (e.g., urban landscape, environment dependencies). How to effectively fine-tune VFMs for DGSS has recently become an open research topic for the vision community. In this paper, we present a novel Spectral-decomposited Tokens (SET) learning framework to push the frontier. Going further than the existing fine-tuning-token and frozen-backbone paradigm, the proposed SET focuses especially on how to learn style-invariant features from these learnable tokens. Specifically, the frozen VFM features are first decomposited into phase and amplitude components in the frequency space, where the phase / amplitude component reflects more on the content / style, respectively. Then, learnable tokens are adapted to learn the content and style, respectively. As the cross-domain differences mainly rest in the style from the amplitude component, such information is decoupled from the tokens. Consequently, the refined feature maps are more stable to represent the pixel-wise content despite the style variation. Extensive cross-domain experiments under a variety of backbones and VFMs show the state-of-the-art performance. We will make the source code publicly available.
Learning Spectral-decomposited Tokens for Domain Generalized Semantic Segmentation
[ "Jingjun Yi", "Qi Bi", "Hao Zheng", "Haolan Zhan", "Wei Ji", "Yawen Huang", "Yuexiang Li", "Yefeng Zheng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Hb0i9E6acF
@inproceedings{ han2024adaid, title={Ada-iD: Active Domain Adaption for Intrusion Detection}, author={Fujun Han and Peng Ye and Shukai Duan and Lidan Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Hb0i9E6acF} }
Vision-based intrusion detection has many applications in life environments, e.g., security, intelligent monitoring, and autonomous driving. Previous works improve the performance of intrusion detection under unknown environments by introducing unsupervised domain adaption (UDA) methods. However, these works do not fully fulfill the practical requirements due to the performance gap between UDA and fully supervised methods. To address the problem, we develop a new and vital active domain adaption intrusion detection task, namely ADA-ID. Our aim is to query and annotate the most informative samples of the target domain at the lowest possible cost, striving for a balance between achieving high performance and keeping low annotation expenses. Specifically, we propose a multi-task joint active domain adaption intrusion detection framework, namely ADAID-YOLO. It consists of a lower branch for detection and an upper branch for segmentation. Further, three effective strategies are designed to better achieve the ADA-ID task: 1) An efficient Dynamic Diffusion Pseudo-Labeling method (DDPL) is introduced to obtain pseudo ground truth to help identify areas of uncertainty in segmentation. 2) An Enhanced Region Impurity and Prediction Uncertainty sampling strategy (Enhanced-RIPU) is proposed to better capture the uncertainty of the segmentation region. 3) A Multi-Element Joint sampling strategy (MEJ) is designed to calculate the uncertainty of the detection comprehensively. Finally, comprehensive experiments and comparisons are conducted on multiple dominant intrusion detection datasets. The results show that our method can outperform other classic and promising active domain adaption methods and reach current SOTA performance, even surpassing the performance of UDA and full supervision on Normal→Foggy with only 0.1% and 10% data annotation, respectively. All source code and trained models will be made public.
Ada-iD: Active Domain Adaption for Intrusion Detection
[ "Fujun Han", "Peng Ye", "Shukai Duan", "Lidan Wang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HacSqd6Yw6
@inproceedings{ yin2024sibivit, title={{SI}-BiViT: Binarizing Vision Transformers with Spatial Interaction}, author={Peng Yin and Xiaosu Zhu and Jingkuan Song and Lianli Gao and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HacSqd6Yw6} }
Binarized Vision Transformers (BiViTs) aim to facilitate the efficient and lightweight utilization of Vision Transformers (ViTs) on devices with limited computational resources. Yet, the current approach to binarizing ViT leads to a substantial performance decrease compared to the full-precision model, posing obstacles to practical deployment. By empirical study, we reveal that spatial interaction (SI) is a critical factor that impacts performance due to a lack of token-level correlation, but previous work ignores this factor. To this end, we design a ViT binarization approach dubbed SI-BiViT to incorporate spatial interaction in the binarization process. Specifically, an SI module is placed alongside the Multi-Layer Perceptron (MLP) module to formulate the dual-branch structure. This structure not only leverages knowledge from pre-trained ViTs by distilling over the original MLP, but also enhances spatial interaction via the introduced SI module. Correspondingly, we design a decoupled training strategy to train these two branches more effectively. Importantly, our SI-BiViT is orthogonal to existing binarized ViT approaches and can be directly plugged in. Extensive experiments demonstrate the strong flexibility and effectiveness of SI-BiViT by plugging our method into four classic ViT backbones to support three downstream tasks, including classification, detection, and segmentation. In particular, SI-BiViT enhances the classification performance of binarized ViTs by an average of 10.52\% in Top-1 accuracy compared to the previous state-of-the-art. The code will be made publicly available.
SI-BiViT: Binarizing Vision Transformers with Spatial Interaction
[ "Peng Yin", "Xiaosu Zhu", "Jingkuan Song", "Lianli Gao", "Heng Tao Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HYvQnAvrQB
@inproceedings{ li2024efficient, title={Efficient Dual-Confounding Eliminating for Weakly-supervised Temporal Action Localization}, author={Ao Li and Huijun Liu and Jinrong Sheng and Zhongming Chen and Yongxin Ge}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HYvQnAvrQB} }
Weakly-supervised Temporal Action Localization (WTAL) following a localization-by-classification paradigm has achieved significant results, yet still grapples with confounding arising from ambiguous snippets. Previous works have attempted to distinguish these ambiguous snippets from action snippets without investigating the underlying causes of their formation, thus failing to effectively eliminate the bias on both action-context and action-content. In this paper, we revisit WTAL from the perspective of the structural causal model to identify the true origins of confounding, and propose an efficient dual-confounding eliminating framework to alleviate these biases. Specifically, we construct a Substituted Confounder Set (SCS) to eliminate the confounding bias on action-content by leveraging the modal disparity between RGB and FLOW. Then, a Multi-level Consistency Mining (MCM) method is designed to mitigate the confounding bias on action-context by utilizing the consistency between discriminative snippets and corresponding proposals at both the feature and label levels. Notably, SCS and MCM can be seamlessly integrated into any two-stream model without additional parameters via the Expectation-Maximization (EM) algorithm. Extensive experiments on two challenging benchmarks including THUMOS14 and ActivityNet-1.2 demonstrate the superior performance of our method.
Efficient Dual-Confounding Eliminating for Weakly-supervised Temporal Action Localization
[ "Ao Li", "Huijun Liu", "Jinrong Sheng", "Zhongming Chen", "Yongxin Ge" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HTZy9hpoYV
@inproceedings{ oncescu2024dissecting, title={Dissecting Temporal Understanding in Text-to-Audio Retrieval}, author={Andreea-Maria Oncescu and Joao F. Henriques and A. Sophia Koepke}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HTZy9hpoYV} }
Recent advancements in machine learning have fueled research on multimodal interactions, such as text-to-video and text-to-audio retrieval tasks. These tasks require models to understand the semantic content of input videos, including objects, sounds and characters. The models also need to learn their spatial arrangement and the temporal relationships of sounds. In this work, we tackle the temporal ordering of sounds, which is an understudied problem in the context of text-to-audio retrieval. In particular, we dissect the temporal understanding capabilities of a state-of-the-art model for text-to-audio retrieval on the AudioCaps dataset. Additionally, we introduce a synthetic text-audio dataset that provides a controlled setting for evaluating the temporal understanding of recent models. Lastly, we investigate a new loss function that encourages text-audio models to focus on the temporal ordering of events.
Dissecting Temporal Understanding in Text-to-Audio Retrieval
[ "Andreea-Maria Oncescu", "Joao F. Henriques", "A. Sophia Koepke" ]
Conference
poster
2409.00851
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HMSZ5XKKlB
@inproceedings{ ge2024towards, title={Towards End-to-End Explainable Facial Action Unit Recognition via Vision-Language Joint Learning}, author={Xuri Ge and Junchen Fu and Fuhai Chen and Shan An and Nicu Sebe and Joemon M. Jose}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HMSZ5XKKlB} }
Facial action units (AUs), as defined in the Facial Action Coding System (FACS), have received significant research interest owing to their diverse range of applications in facial state analysis. Current mainstream FAU recognition models have a notable limitation, i.e., focusing only on the accuracy of AU recognition and overlooking explanations of corresponding AU states. In this paper, we propose an end-to-end Vision-Language joint learning network for explainable FAU recognition (termed VL-FAU), which aims to reinforce AU representation capability and language interpretability through the integration of joint multimodal tasks. Specifically, VL-FAU brings together language models to generate fine-grained local muscle descriptions and a distinguishable global face description when optimising FAU recognition. Through this, the global facial representation and its local AU representations will achieve higher distinguishability among different AUs and different subjects. In addition, multi-level AU representation learning is utilised to improve AU individual attention-aware representation capabilities based on the multi-scale combined facial stem feature. Extensive experiments on the DISFA and BP4D AU datasets show that the proposed approach achieves superior performance over the state-of-the-art methods on most of the metrics. In addition, compared with mainstream FAU recognition methods, VL-FAU can provide local- and global-level interpretability language descriptions with the AUs' predictions.
Towards End-to-End Explainable Facial Action Unit Recognition via Vision-Language Joint Learning
[ "Xuri Ge", "Junchen Fu", "Fuhai Chen", "Shan An", "Nicu Sebe", "Joemon M. Jose" ]
Conference
poster
2408.00644
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HJa79aWcS0
@inproceedings{ zhu2024dualfed, title={DualFed: Enjoying both Generalization and Personalization in Federated Learning via Hierachical Representations}, author={Guogang Zhu and Xuefeng Liu and Jianwei Niu and Shaojie Tang and Xinghao Wu and Jiayuan Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HJa79aWcS0} }
In personalized federated learning (PFL), it is widely recognized that achieving both high model generalization and effective personalization poses a significant challenge due to their conflicting nature. As a result, existing PFL methods can only manage a trade-off between these two objectives. This raises an interesting question: Is it feasible to develop a model capable of achieving both objectives simultaneously? Our paper presents an affirmative answer, and the key lies in the observation that deep models inherently exhibit hierarchical architectures, which produce representations with various levels of generalization and personalization at different stages. A straightforward approach stemming from this observation is to select multiple representations from these layers and combine them to concurrently achieve generalization and personalization. However, the number of candidate representations is commonly huge, which makes this method infeasible due to high computational costs. To address this problem, we propose DualFed, a new method that can directly yield dual representations corresponding to generalization and personalization, respectively, thereby simplifying the optimization task. Specifically, DualFed inserts a personalized projection network between the encoder and classifier. The pre-projection representations are able to capture generalized information shareable across clients, and the post-projection representations are effective in capturing task-specific information on local clients. This design minimizes the mutual interference between generalization and personalization, thereby achieving a win-win situation. Extensive experiments show that DualFed can outperform other FL methods.
DualFed: Enjoying both Generalization and Personalization in Federated Learning via Hierachical Representations
[ "Guogang Zhu", "Xuefeng Liu", "Jianwei Niu", "Shaojie Tang", "Xinghao Wu", "Jiayuan Zhang" ]
Conference
poster
2407.17754
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HHzHRuIyaW
@inproceedings{ zhang2024making, title={Making Large Language Models Perform Better in Knowledge Graph Completion}, author={Yichi Zhang and Zhuo Chen and Lingbing Guo and yajing Xu and Wen Zhang and Huajun Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HHzHRuIyaW} }
Large language model (LLM) based knowledge graph completion (KGC) aims to predict the missing triples in the KGs with LLMs. However, research on LLM-based KGC fails to sufficiently harness LLMs' inference proficiencies, overlooking critical structural information integral to KGs. In this paper, we explore methods to incorporate structural information into the LLMs, with the overarching goal of facilitating structure-aware reasoning. We first discuss the existing LLM paradigms like in-context learning and instruction tuning, proposing basic structural information injection approaches. Then we propose a Knowledge Prefix Adapter (KoPA) to fulfill this goal. The KoPA uses a structural pre-training phase to comprehend the intricate entities and relations within KGs, representing them as structural embeddings. Then KoPA communicates such cross-modal structural information understanding to the LLMs through a knowledge prefix adapter, which projects the structural embeddings into the textual space and obtains virtual knowledge tokens positioned as a prefix of the input prompt. We conduct comprehensive experiments and provide incisive analysis concerning how the introduction of cross-modal structural information benefits LLMs' factual knowledge reasoning ability. Our code and data are available at https://anonymous.4open.science/r/KoPA-3415.
Making Large Language Models Perform Better in Knowledge Graph Completion
[ "Yichi Zhang", "Zhuo Chen", "Lingbing Guo", "yajing Xu", "Wen Zhang", "Huajun Chen" ]
Conference
oral
2310.06671
[ "https://github.com/zjukg/kopa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HGjJkNH8Pu
@inproceedings{ woo2024let, title={Let Me Finish My Sentence: Video Temporal Grounding with Holistic Text Understanding}, author={Jongbhin Woo and Hyeonggon Ryu and Youngjoon Jang and Jae Won Cho and Joon Son Chung}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HGjJkNH8Pu} }
Video Temporal Grounding (VTG) aims to identify visual frames in a video clip that match text queries. Recent studies in VTG employ cross-attention to correlate visual frames and text queries as individual token sequences. However, these approaches overlook a crucial aspect of the problem: a holistic understanding of the query sentence. A model may capture correlations between individual word tokens and arbitrary visual frames while possibly missing out on the global meaning. To address this, we introduce two primary contributions: (1) a visual frame-level gate mechanism that incorporates holistic textual information, (2) cross-modal alignment loss to learn the fine-grained correlation between query and relevant frames. As a result, we regularize the effect of individual word tokens and suppress irrelevant visual frames. We demonstrate that our method outperforms state-of-the-art approaches in VTG benchmarks, indicating that holistic text understanding guides the model to focus on the semantically important parts within the video.
Let Me Finish My Sentence: Video Temporal Grounding with Holistic Text Understanding
[ "Jongbhin Woo", "Hyeonggon Ryu", "Youngjoon Jang", "Jae Won Cho", "Joon Son Chung" ]
Conference
poster
2410.13598
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=HDM47gpvq6
@inproceedings{ chen2024learning, title={Learning to Correction: Explainable Feedback Generation for Visual Commonsense Reasoning Distractor}, author={Jiali Chen and Xusen Hei and Yuqi Xue and Yuancheng Wei and Jiayuan Xie and Yi Cai and Qing Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=HDM47gpvq6} }
Large multimodal models (LMMs) have shown remarkable performance in the visual commonsense reasoning (VCR) task, which aims to answer a multiple-choice question based on visual commonsense within an image. However, the ability of LMMs to correct potential visual commonsense errors in the distractor upon their occurrence is yet under-explored. Drawing inspiration from how a human teacher crafts challenging distractors to test students' comprehension of the concepts or skills and assists them in identifying and correcting errors toward the answer, we present the first study that has LMMs simulate this error correction learning process. To this end, we employ GPT-4 as a ``teacher'' to collect the explainable feedback dataset VCR-DF for error correction, which serves as a benchmark to evaluate the ability of LMMs to identify misconceptions and clarify reasons behind the error in VCR distractors toward final answers. In addition, we propose an LMM-based Pedagogical Expert Instructed Feedback Generation (PEIFG) model to incorporate the learnable expert prompts and multimodal instruction as guidance for feedback generation. Experimental results show that our PEIFG significantly outperforms existing LMMs. We believe our benchmark carves out a new direction for evaluating the capabilities of LMMs.
Learning to Correction: Explainable Feedback Generation for Visual Commonsense Reasoning Distractor
[ "Jiali Chen", "Xusen Hei", "Yuqi Xue", "Yuancheng Wei", "Jiayuan Xie", "Yi Cai", "Qing Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=H7etFJugLW
@inproceedings{ su2024amgembedding, title={{AMG}-Embedding: a Self-Supervised Embedding Approach for Audio Identification}, author={Yuhang Su and Wei Hu and Fan Zhang and Qiming Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=H7etFJugLW} }
Audio Identification aims to precisely retrieve exact matches from a vast music repository through a query audio snippet. The need for specificity and granularity has traditionally led to representing music audio using numerous short fixed-duration overlapped segment/shingle features in fingerprinting approaches. However, fingerprinting imposes constraints on scalability and efficiency, as hundreds or even thousands of embeddings are generated to represent a typical music track. In this paper, we present an innovative self-supervised approach called Angular Margin Guided Embedding (AMG-Embedding). AMG-Embedding is built on a traditional fingerprinting encoder and aims to represent variable-duration non-overlapped segments as embeddings through a two-stage embedding and class-level learning process. AMG-Embedding significantly reduces the number of generated embeddings while simultaneously achieving highly specific fragment-level audio identification. Experimental results demonstrate that AMG-Embedding achieves retrieval accuracy comparable to the underlying fingerprinting approach while consuming less than $1/10th$ of its storage and retrieval time. The efficiency gains of our approach position it as a promising solution for scalable and efficient audio identification systems.
AMG-Embedding: a Self-Supervised Embedding Approach for Audio Identification
[ "Yuhang Su", "Wei Hu", "Fan Zhang", "Qiming Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=H4E32wjshc
@inproceedings{ zhang2024dragentitytrajectory, title={DragEntity:Trajectory Guided Video Generation using Entity and Positional Relationships}, author={Wan Zhang and Sheng Tang and Jiawei Wei and Ruize Zhang and Juan Cao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=H4E32wjshc} }
In recent years, diffusion models have achieved tremendous success in the field of video generation, with controllable video generation receiving significant attention. However, existing control methods still face two limitations: Firstly, control conditions (such as depth maps, 3D Mesh) are difficult for ordinary users to obtain directly. Secondly, it’s challenging to drive multiple objects through complex motions with multiple trajectories simultaneously. In this paper, we introduce DragEntity, a video generation model that utilizes entity representation for controlling the motion of multiple objects. In comparison to previous methods, DragEntity offers two main advantages: 1) Trajectory-based methods are more user-friendly for interaction. Users only need to draw trajectories during the interaction to generate videos. 2) We use entity representation to represent any object in the image, and multiple objects can maintain relative spatial relationships. Therefore, we allow multiple trajectories to control multiple objects in the image with different levels of complexity simultaneously. Our experiments validate the effectiveness of DragEntity, demonstrating its superior performance in fine-grained control in video generation.
DragEntity:Trajectory Guided Video Generation using Entity and Positional Relationships
[ "Wan Zhang", "Sheng Tang", "Jiawei Wei", "Ruize Zhang", "Juan Cao" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=H0nsCOuT7m
@inproceedings{ song2024magiccartoon, title={MagicCartoon: 3D Pose and Shape Estimation for Bipedal Cartoon Characters}, author={Yu-Pei Song and Yuan-Tong Liu and Xiao Wu and Qi He and Zhaoquan Yuan and Ao Luo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=H0nsCOuT7m} }
The 3D model can be estimated by regressing the pose and shape parameters from the image data of the digital model. The reconstruction of 3D cartoon characters poses a challenging task due to diverse visual representations and postural variations. This paper proposes a dual-branch structure named MagicCartoon for 3D bipedal cartoon character estimation, which models pose and shape independently through feature decoupling. Considering the correlation between category difference and shape parameters, a hybrid feature fusion technique is introduced, which integrates the global features of the original image with the corresponding local features expressed by the puzzle image, reducing the abstractness of understanding shape parameter differences. To semantically align images and geometry in the feature space, a geometric-guided feedback loop is proposed that operates iteratively, so that the pose of the modeling results is expressed consistently with the image. Moreover, a feature consistency loss is designed to augment the training data by incorporating the same character with different postures and the same posture of different characters. It enhances the correlation between the features extracted by the backbone network and the specific task. Experiments conducted on the 3DBiCar dataset demonstrate that MagicCartoon outperforms the state-of-the-art methods.
MagicCartoon: 3D Pose and Shape Estimation for Bipedal Cartoon Characters
[ "Yu-Pei Song", "Yuan-Tong Liu", "Xiao Wu", "Qi He", "Zhaoquan Yuan", "Ao Luo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Gt3a8A1wLg
@inproceedings{ xu2024leveraging, title={Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning}, author={Wenxin Xu and Hexin Jiang and xuefeng liang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Gt3a8A1wLg} }
Multimodal Emotion Recognition (MER) may encounter incomplete multimodal scenarios caused by sensor damage or privacy protection in practical applications. Existing incomplete multimodal learning methods focus on learning better joint representations across modalities. However, our investigation shows that they fall short in learning unimodal representations, which are rather discriminative as well. Instead, we propose a novel framework named Mixture of Modality Knowledge Experts (MoMKE) with two-stage training. In unimodal expert training, each expert learns the unimodal knowledge from the corresponding modality. In experts mixing training, both unimodal and joint representations are learned by leveraging the knowledge of all modality experts. In addition, we design a special Soft Router that can enrich the modality representations by dynamically mixing the unimodal representations and the joint representations. Various incomplete multimodal experiments on three benchmark datasets showcase the robust performance of MoMKE, especially under severely incomplete conditions. Visualization analysis further reveals the considerable value of unimodal and joint representations.
Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning
[ "Wenxin Xu", "Hexin Jiang", "xuefeng liang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Gl3a5nusJP
@inproceedings{ liu2024fmclip, title={{FM}-{CLIP}: Flexible Modal {CLIP} for Face Anti-Spoofing}, author={Ajian Liu and Ma Hui and Junze Zheng and Haocheng Yuan and Xiaoyuan Yu and Yanyan Liang and Sergio Escalera and Jun Wan and Zhen Lei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Gl3a5nusJP} }
Flexible modal Face Anti-spoofing (FAS) aims to aggregate all the available training modalities' data to train a model, and enables flexible testing of any given modal samples. Previous works introduce shared cross-modal transformers (attentions) to facilitate the learning of modality-agnostic features, which inevitably leads to the distortion of feature structures and achieves limited performance. In this work, borrowing a solution from large-scale vision-language models (VLMs) instead of directly removing modality-specific signals from visual features, we propose a novel Flexible Modal CLIP (FM-CLIP) for flexible modal FAS, which can utilize text features to dynamically adjust visual features to be modality-independent. In the visual branch, considering the huge visual differences of the same attack in different modalities, which make it difficult for classifiers to flexibly identify subtle spoofing clues in different test modalities, we propose a Cross-Modal Spoofing Enhancer (CMS-Enhancer). It includes a Frequency Extractor (FE) and a Cross-Modal Interactor (CMI), aiming to map different modal attacks into a shared frequency space to reduce interference from modality-specific signals and enhance spoofing clues by leveraging cross-modal learning from the shared frequency space. In the text branch, we introduce a Language-Guided Patch Alignment (LGPA) based on prompt learning, which further guides the image encoder to focus on patch-level spoofing representations through dynamic weighting by text features. Thus, our FM-CLIP can flexibly test different modal samples by identifying and enhancing modality-agnostic spoofing cues. Finally, extensive experiments show that FM-CLIP is effective and outperforms state-of-the-art methods on multiple multi-modal datasets.
FM-CLIP: Flexible Modal CLIP for Face Anti-Spoofing
[ "Ajian Liu", "Ma Hui", "Junze Zheng", "Haocheng Yuan", "Xiaoyuan Yu", "Yanyan Liang", "Sergio Escalera", "Jun Wan", "Zhen Lei" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GjtQKXxv2G
@inproceedings{ wu2024rdlnet, title={{RDLN}et: A Novel and Accurate Real-world Document Localization Method}, author={Yaqiang Wu and Zhen Xu and Yong Duan and Yanlai Wu and Qinghua Zheng and Hui Li and Xiaochen Hu and Lianwen Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GjtQKXxv2G} }
The increasing use of smartphones for capturing documents in various real-world conditions has underscored the need for robust document localization technologies. Current challenges in this domain include handling diverse document types, complex backgrounds, and varying photographic conditions such as low contrast and occlusion. However, there are currently no publicly available datasets covering these complex scenarios, and few methods demonstrate their capabilities on such scenes. To address these issues, we create a new comprehensive real-world document localization benchmark dataset that contains the complex scenarios mentioned above and propose a novel Real-world Document Localization Network (RDLNet) for locating targeted documents in the wild. The RDLNet consists of an innovative light-SAM encoder and a masked attention decoder. Utilizing the light-SAM encoder, the RDLNet transfers the mighty generalization capability of SAM to the document localization task. In the decoding stage, the RDLNet exploits the masked attention and object query method to efficiently output triple-branch predictions consisting of corner point coordinates, instance-level segmentation areas and categories of different documents without extra post-processing. We compare the performance of RDLNet with other state-of-the-art approaches for real-world document localization on multiple benchmarks, the results of which reveal that RDLNet remarkably outperforms contemporary methods, demonstrating its superiority in terms of both accuracy and practicability.
RDLNet: A Novel and Accurate Real-world Document Localization Method
[ "Yaqiang Wu", "Zhen Xu", "Yong Duan", "Yanlai Wu", "Qinghua Zheng", "Hui Li", "Xiaochen Hu", "Lianwen Jin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GjmjOCYntQ
@inproceedings{ guo2024magicvfx, title={Magic{VFX}: Visual Effects Synthesis in Just Minutes}, author={Jiaqi Guo and Lianli Gao and Junchen Zhu and JiaxinZhang and Siyang Li and Jingkuan Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GjmjOCYntQ} }
Visual effects synthesis, which aims at enhancing raw footage with virtual elements for greater expressiveness, is crucial in the film and television industry. As the demand for detailed and realistic effects escalates in modern production, professionals are compelled to allocate substantial time and resources to this endeavor. Thus, there is an urgent need to explore more convenient and less resource-intensive methods, such as incorporating the burgeoning Artificial Intelligence Generated Content (AIGC) technology. However, research into this potential integration has yet to be conducted. As the first work to establish a connection between visual effects synthesis and AIGC technology, we start by carefully setting up two paradigms according to whether pre-produced effects are needed: synthesis with reference effects and synthesis without reference effects. Following this, we compile a dataset by processing a collection of effects videos and scene videos, which contains a wide variety of effect categories and scenarios, adequately covering the common effects seen in the film and television industry. Furthermore, we explore the capabilities of a pre-trained text-to-video model to synthesize visual effects within these two paradigms. The experimental results demonstrate that the pipeline we established can effectively produce impressive visual effects synthesis outcomes, thereby evidencing the significant potential of existing AIGC technology for application in visual effects synthesis tasks. Our dataset can be found at https://github.com/ruffiann/MagicVFX.
MagicVFX: Visual Effects Synthesis in Just Minutes
[ "Jiaqi Guo", "Lianli Gao", "Junchen Zhu", "JiaxinZhang", "Siyang Li", "Jingkuan Song" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Gj27LxfzXd
@inproceedings{ mai2024all, title={All rivers run into the sea: Unified Modality Brain-Inspired Emotional Central Mechanism}, author={Xinji Mai and Junxiong Lin and Haoran Wang and Zeng Tao and Yan Wang and Shaoqi Yan and Xuan Tong and Jiawen Yu and Boyang Wang and Ziheng Zhou and Qing Zhao and Shuyong Gao and Wenqiang Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Gj27LxfzXd} }
In the field of affective computing, fully leveraging information from a variety of sensory modalities is essential for the comprehensive understanding and processing of human emotions. Inspired by the process through which the human brain handles emotions and the theory of cross-modal plasticity, we propose UMBEnet, a brain-like unified modal affective processing network. The primary design of UMBEnet includes a Dual-Stream (DS) structure that fuses intrinsic prompts with a Prompt Pool and a Sparse Feature Fusion (SFF) module. The design of the Prompt Pool is aimed at integrating information from different modalities, while intrinsic prompts are intended to enhance the system's predictive guidance capabilities and effectively manage knowledge related to emotion classification. Moreover, considering the sparsity of effective information across different modalities, the Sparse Feature Fusion module aims to make full use of all available sensory data through the sparse integration of modality fusion prompts and intrinsic prompts, maintaining high adaptability and sensitivity to complex emotional states. Extensive experiments on the largest benchmark datasets in the Dynamic Facial Expression Recognition(DFER) field, including DFEW, FERV39k, and MAFW, have proven that UMBEnet consistently outperforms the current state-of-the-art methods. Notably, in scenarios of modality absence and multimodal contexts, UMBEnet significantly surpasses the leading current methods, demonstrating outstanding performance and adaptability in tasks that involve complex emotional understanding with rich multimodal information.
All rivers run into the sea: Unified Modality Brain-Inspired Emotional Central Mechanism
[ "Xinji Mai", "Junxiong Lin", "Haoran Wang", "Zeng Tao", "Yan Wang", "Shaoqi Yan", "Xuan Tong", "Jiawen Yu", "Boyang Wang", "Ziheng Zhou", "Qing Zhao", "Shuyong Gao", "Wenqiang Zhang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Gf94NMabeh
@inproceedings{ liu2024dysarl, title={DySarl: Dynamic Structure-Aware Representation Learning for Multimodal Knowledge Graph Reasoning}, author={Kangzheng Liu and Feng Zhao and Yu Yang and Guandong Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Gf94NMabeh} }
Multimodal knowledge graph (MKG) reasoning has attracted significant attention since impressive performance has been achieved by adding multimodal auxiliary information (i.e., texts and images) to the entities of traditional KGs. However, existing studies heavily rely on path-based methods for learning structural modality, failing to capture the complex structural interactions among multimodal entities beyond the reasoning path. In addition, existing studies, which utilize asymmetric coattention to independently learn the static interplay between different modalities without dynamically joining the reasoning process, have largely ignored the dynamic impact of different multimodal features on different decision facts for reasoning. We propose a novel Dynamic Structure-aware representation learning method, namely DySarl, to overcome this problem and significantly improve the MKG reasoning performance. Specifically, we devise a dual-space multihop structural learning module in DySarl, aggregating the multihop structural features of multimodal entities via a novel message-passing mechanism. It integrates the message paradigms in Euclidean and hyperbolic spaces, effectively preserving the neighborhood information beyond the limited multimodal query paths. Furthermore, DySarl has an interactive symmetric attention module to explicitly learn the dynamic impacts of unimodal attention senders and multimodal attention targets on decision facts through a newly designed symmetric attention component and fact-specific gated attention unit, equipping DySarl with the dynamic associations between the multimodal feature learning and later reasoning. Extensive experiments show that DySarl achieves significantly improved reasoning performance on two public MKG datasets compared with that of the state-of-the-art baselines. Source codes are available at https://anonymous.4open.science/r/DySarl.
DySarl: Dynamic Structure-Aware Representation Learning for Multimodal Knowledge Graph Reasoning
[ "Kangzheng Liu", "Feng Zhao", "Yu Yang", "Guandong Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GZuDmQvJ1f
@inproceedings{ teng2024enhancing, title={Enhancing Unsupervised Visible-Infrared Person Re-Identification with Bidirectional-Consistency Gradual Matching}, author={Xiao Teng and Xingyu Shen and Kele Xu and Long Lan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GZuDmQvJ1f} }
Unsupervised visible-infrared person re-identification (USL-VI-ReID) is of great research and practical significance yet remains challenging due to significant modality discrepancy and lack of annotations. Many existing approaches utilize variants of bipartite graph global matching algorithms to address this issue, aiming to establish cross-modality correspondences. However, these methods may encounter mismatches due to significant modality gaps and limited model representation. To mitigate this, we propose a simple yet effective framework for USL-VI-ReID, which gradually establishes associations between different modalities. To measure the confidence whether samples from different modalities belong to the same identity, we introduce a bidirectional-consistency criterion, which not only considers direct relationships between samples from different modalities but also incorporates potential hard negative samples from the same modality. Additionally, we propose a cross-modality correlation preserving module to enhance the semantic representation of the model by maintaining consistency in correlations across modalities. Extensive experiments conducted on the public SYSU-MM01 and RegDB datasets demonstrate the superiority of our method over existing USL-VI-ReID approaches across various settings, despite its simplicity. Our code will be released.
Enhancing Unsupervised Visible-Infrared Person Re-Identification with Bidirectional-Consistency Gradual Matching
[ "Xiao Teng", "Xingyu Shen", "Kele Xu", "Long Lan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GYomxff6HZ
@inproceedings{ shen2024revisiting, title={Revisiting Knowledge Tracing: A Simple and Powerful Model}, author={Xiaoxuan Shen and Fenghua Yu and yaqi Liu and Ruxia Liang and Qian Wan and Kai Yang and Jianwen Sun}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GYomxff6HZ} }
Advances in multimedia technology and its widespread application in education have made multimedia learning increasingly important. Knowledge Tracing (KT) is the key technology for achieving adaptive multimedia learning, aiming to monitor the degree of knowledge acquisition and predict students' performance during the learning process. Current KT research is dedicated to enhancing the performance of KT problems by integrating the most advanced deep learning techniques. However, this has led to increasingly complex models, which reduce model usability and divert researchers' attention away from exploring the core issues of KT. This paper aims to tackle the fundamental challenges of KT tasks, including the knowledge state representation and the core architecture design, and investigate a novel KT model that is both simple and powerful. We have revisited the KT task and propose the ReKT model. First, taking inspiration from the decision-making process of human teachers, we model the knowledge state of students from three distinct perspectives: questions, concepts, and domains. Second, building upon human cognitive development models, such as constructivism, we have designed a Forget-Response-Update (FRU) framework to serve as the core architecture for the KT task. The FRU is composed of just two linear regression units, making it an extremely lightweight framework. Extensive comparisons were conducted with 22 state-of-the-art KT models on 7 publicly available datasets. The experimental results demonstrate that ReKT outperforms all the comparative methods in question-based KT tasks, and consistently achieves the best (in most cases) or near-best performance in concept-based KT tasks. Furthermore, in comparison to other KT core architectures like Transformers or LSTMs, the FRU achieves superior prediction performance with only approximately 38% of the computing resources. Through this exploration of a KT model that is both simple and powerful, we hope to offer new insights for future KT research. The code is in the supplementary materials.
Revisiting Knowledge Tracing: A Simple and Powerful Model
[ "Xiaoxuan Shen", "Fenghua Yu", "yaqi Liu", "Ruxia Liang", "Qian Wan", "Kai Yang", "Jianwen Sun" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GVyxNEZLWE
@inproceedings{ zhang2024refscale, title={RefScale: Multi-temporal Assisted Image Rescaling in Repetitive Observation Scenarios}, author={Zhen Zhang and Jing Xiao and Liang Liao and Mi Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GVyxNEZLWE} }
With the continuous development of imaging technology and the gradual expansion of the amount of image data, achieving high compression efficiency for high-resolution images is a challenging problem for storage and transmission. Image rescaling aims to reduce the original data amount through downscaling to facilitate data transmission and storage before encoding, and to reconstruct the quality through upscaling after decoding, which is a key technology to assist in high-ratio image compression. However, existing rescaling approaches are more focused on reconstruction quality rather than image compressibility. In repetitive observation scenarios, multi-temporal images brought by periodic observations provide an opportunity to alleviate the conflict between reconstruction quality and compressibility: the historical images used as references indicate what information can be dropped at downscaling to reduce the information content of the downscaled image, and provide the dropped information to improve the image restoration quality at upscaling. Based on this consideration, we propose a novel multi-temporal assisted reference-based image rescaling framework (RefScale). Specifically, a referencing network is proposed to calculate the similarity map to provide the referencing condition, which is then injected into the conditional invertible neural network to guide the information drop at the downscaling stage and information fusion at the upscaling stage. Additionally, a low-resolution guidance loss is proposed to further constrain the data amount of the downscaled LR image. Experiments conducted on both satellite imaging and autonomous driving show the superior performance of our approach over the state-of-the-art methods.
RefScale: Multi-temporal Assisted Image Rescaling in Repetitive Observation Scenarios
[ "Zhen Zhang", "Jing Xiao", "Liang Liao", "Mi Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GUNmM4mc08
@inproceedings{ lv2024pickanddraw, title={Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization}, author={Henglei Lv and Jiayu Xiao and Liang Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GUNmM4mc08} }
Diffusion-based text-to-image personalization has achieved great success in generating user-specified subjects in various contexts. However, finetuning-based methods often suffer from model overfitting, leading to reduced generative diversity, particularly when the provided subject images are limited. To address this issue, we introduce Pick-and-Draw, a training-free semantic guidance approach that enhances identity consistency and generative diversity. Our method comprises two key components: appearance-picking guidance and layout-drawing guidance. In the appearance-picking phase, we create an appearance palette from visual features of the reference image, selecting local patterns to maintain consistent subject identity. In the layout-drawing phase, we use a generative template from the base diffusion model to sketch the subject shape and scene outline, leveraging its strong image prior to produce diverse contexts based on various text prompts. Pick-and-Draw can be seamlessly integrated with any personalized diffusion model and requires only a single reference image. Both qualitative and quantitative evaluations demonstrate that our approach significantly improves identity consistency and generative diversity, establishing a new Pareto frontier in the balance between subject fidelity and image-text alignment.
Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization
[ "Henglei Lv", "Jiayu Xiao", "Liang Li" ]
Conference
poster
2401.16762
[ "" ]
https://huggingface.co/papers/2401.16762
0
2
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=GSmdnRqbpD
@inproceedings{ zhu2024data, title={Data Generation Scheme for Thermal Modality with Edge-Guided Adversarial Conditional Diffusion Model}, author={Guoqing Zhu and Honghu Pan and Qiang Wang and Chao Tian and Chao Yang and Zhenyu He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GSmdnRqbpD} }
In challenging low-light and adverse weather conditions, thermal vision algorithms, especially object detection, have exhibited remarkable potential, contrasting with the frequent struggles encountered by visible vision algorithms. Nevertheless, the efficacy of thermal vision algorithms driven by deep learning models remains constrained by the paucity of available training data samples. To this end, this paper introduces a novel approach termed the edge-guided conditional diffusion model (ECDM). This framework aims to produce meticulously aligned pseudo thermal images at the pixel level, leveraging edge information extracted from visible images. By utilizing edges as contextual cues from the visible domain, the diffusion model achieves meticulous control over the delineation of objects within the generated images. To alleviate the impact of visible-specific edge information that should not appear in the thermal domain, a two-stage modality adversarial training (TMAT) strategy is proposed to filter it out of the generated images by differentiating the visible and thermal modalities. Extensive experiments on LLVIP demonstrate ECDM's superiority over existing state-of-the-art approaches in terms of image generation quality. The pseudo thermal images generated by ECDM also help to boost the performance of various thermal object detectors by up to 7.1 mAP.
Data Generation Scheme for Thermal Modality with Edge-Guided Adversarial Conditional Diffusion Model
[ "Guoqing Zhu", "Honghu Pan", "Qiang Wang", "Chao Tian", "Chao Yang", "Zhenyu He" ]
Conference
poster
2408.03748
[ "https://github.com/lengmo1996/ECDM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GQkPMFUWVf
@inproceedings{ li2024unveiling, title={Unveiling Structural Memorization: Structural Membership Inference Attack for Text-to-Image Diffusion Models}, author={Qiao Li and Xiaomeng Fu and Xi Wang and Jin Liu and Xingyu Gao and Jiao Dai and Jizhong Han}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GQkPMFUWVf} }
With the rapid advancements of large-scale text-to-image diffusion models, various practical applications have emerged, bringing significant convenience to society. However, model developers may misuse the unauthorized data to train diffusion models. These data are at risk of being memorized by the models, thus potentially violating citizens' privacy rights. Therefore, in order to judge whether a specific image is utilized as a member of a model's training set, Membership Inference Attack (MIA) is proposed to serve as a tool for privacy protection. Current MIA methods predominantly utilize pixel-wise comparisons as distinguishing clues, considering the pixel-level memorization characteristic of diffusion models. However, it is practically impossible for text-to-image models to memorize all the pixel-level information in massive training sets. Therefore, we move to the more advanced structure-level memorization. Observations on the diffusion process show that the structures of members are better preserved compared to those of nonmembers, indicating that diffusion models possess the capability to remember the structures of member images from training sets. Drawing on these insights, we propose a simple yet effective MIA method tailored for text-to-image diffusion models. Extensive experimental results validate the efficacy of our approach. Compared to current pixel-level baselines, our approach not only achieves state-of-the-art performance but also demonstrates remarkable robustness against various distortions.
Unveiling Structural Memorization: Structural Membership Inference Attack for Text-to-Image Diffusion Models
[ "Qiao Li", "Xiaomeng Fu", "Xi Wang", "Jin Liu", "Xingyu Gao", "Jiao Dai", "Jizhong Han" ]
Conference
poster
2407.13252
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GMlsODrCqB
@inproceedings{ lin2024mitigating, title={Mitigating Sample Selection Bias with Robust Domain Adaption in Multimedia Recommendation}, author={Jiaye Lin and Qing Li and Guorui Xie and Zhongxu Guan and Yong Jiang and Ting Xu and Zhong Zhang and Peilin Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GMlsODrCqB} }
Industrial multimedia recommendation systems extensively utilize cascade architectures to deliver personalized content for users, generally consisting of multiple stages like retrieval and ranking. However, retrieval models have long suffered from Sample Selection Bias (SSB) due to the distribution discrepancy between the exposed items used for model training and the candidates (almost unexposed) during inference, affecting recommendation performance. Traditional methods utilize retrieval candidates as augmented training data, indiscriminately treating unexposed data as negative samples, which leads to inaccuracies and noise. Some efforts rely on unbiased datasets, while they are costly to collect and insufficient for industrial models. In this paper, we propose a debiasing framework named DAMCAR, which introduces Domain Adaptation to mitigate SSB in Multimedia CAscade Recommendation systems. Firstly, we sample hard-to-distinguish samples from unexposed data to serve as the target domain, optimizing data quality and resource utilization. Secondly, adversarial domain adaptation is employed to generate pseudo-labels for each sample. To enhance robustness, we utilize Exponential Moving Average (EMA) to create a teacher model that supervises the generation of pseudo-labels via self-distillation. Finally, we obtain a retrieval model that maintains stable performance during inference through a hybrid training mechanism. We conduct offline experiments on two real-world datasets and deploy our approach in the retrieval model of a multimedia video recommendation system for online A/B testing. Comprehensive experimental results demonstrate the effectiveness of DAMCAR in practical applications.
Mitigating Sample Selection Bias with Robust Domain Adaption in Multimedia Recommendation
[ "Jiaye Lin", "Qing Li", "Guorui Xie", "Zhongxu Guan", "Yong Jiang", "Ting Xu", "Zhong Zhang", "Peilin Zhao" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GIw7pmMPPX
@inproceedings{ ye2024relscene, title={RelScene: A Benchmark and baseline for Spatial Relations in text-driven 3D Scene Generation}, author={Zhaoda Ye and Xinhan Zheng and Yang Liu and Yuxin Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GIw7pmMPPX} }
Text-driven 3D indoor scene generation aims to automatically generate and arrange objects to form a 3D scene that accurately captures the semantics detailed in the given text description. Recent works have shown the potential to generate 3D scenes guided by specific object categories and room layouts, but lack a robust mechanism to maintain consistent spatial relationships in alignment with the provided text description during 3D scene generation. Besides, annotations of objects and their relationships in 3D scenes are usually time- and cost-consuming to produce, and thus not easily obtained for model training. Thus, in this paper, we construct a dataset and benchmark for assessing spatial relations in text-driven 3D scene generation, which contains a comprehensive collection of 3D scenes with textual descriptions, annotates object spatial relations, and provides both template and free-form natural language descriptions. We also provide a pseudo description feature generation method to address 3D scenes without language annotations. We design an aligned latent space for spatial relations in 3D scenes and text descriptions, in which features can be sampled according to the spatial relation for few-shot learning. We also propose new metrics to investigate the ability of the approach to generate correct spatial relationships among objects.
RelScene: A Benchmark and baseline for Spatial Relations in text-driven 3D Scene Generation
[ "Zhaoda Ye", "Xinhan Zheng", "Yang Liu", "Yuxin Peng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GFU5s42jkr
@inproceedings{ he2024towards, title={Towards Stricter Black-box Integrity Verification of Deep Neural Network Models}, author={Chaoxiang He and Xiaofan Bai and Xiaojing Ma and Bin Benjamin Zhu and Pingyi Hu and Jiayun Fu and Hai Jin and Dongmei Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GFU5s42jkr} }
Cloud-based machine learning services are attractive but expose a cloud-deployed DNN model to the risk of tampering. Black-box integrity verification (BIV) enables the owner or end-users to ascertain whether a cloud-deployed DNN model has been tampered with via returned responses of only top-1 labels. Fingerprinting generates fingerprint samples to query the model to achieve BIV of the model with no impact on the model's accuracy. In this paper, we introduce BIVBench, the first benchmark for BIV of DNN models, encompassing 16 types of practical modifications covering typical tampering scenarios. We reveal that existing fingerprinting methods, which focus on a limited range of tampering types, lack sensitivity in detecting subtle, yet common and potentially severe, tampering effectively. To fill this gap, we propose MiSentry (Model integrity Sentry), a novel fingerprinting method that strategically incorporates only a few crucial subtly tampered models into a model zoo, leverages meta-learning, and maximizes the divergence of the output predictions between the untampered targeted model and those models in the model zoo to generate highly sensitive, generalizable, and effective fingerprint samples. Extensive evaluations using BIVBench demonstrate that MiSentry substantially outperforms existing state-of-the-art fingerprinting methods, particularly in detecting subtle tampering.
Towards Stricter Black-box Integrity Verification of Deep Neural Network Models
[ "Chaoxiang He", "Xiaofan Bai", "Xiaojing Ma", "Bin Benjamin Zhu", "Pingyi Hu", "Jiayun Fu", "Hai Jin", "Dongmei Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=GF8NGtK0lK
@inproceedings{ yan2024lowrank, title={Low-rank Prompt Interaction for Continual Vision-Language Retrieval}, author={Weicai Yan and Ye Wang and Wang Lin and Zirun Guo and Zhou Zhao and Tao Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=GF8NGtK0lK} }
Research on continual learning in multi-modal tasks has been receiving increasing attention. However, most existing work overlooks explicit cross-modal and cross-task interaction. In this paper, we propose the Low-rank Prompt Interaction (LPI) to address this general problem of multi-modal understanding, which considers both cross-modal interaction and cross-task interaction. Specifically, as for the former, we employ multi-modal correlation modules for the corresponding Transformer layers. Considering that the number of trainable parameters scales with the number of layers and tasks, we propose Low-rank Interaction-augmented Decomposition to avoid memory explosion, while enhancing the cross-modal association through sharing and separating common-specific low-rank factors. In addition, due to the multi-modal semantic differences carried by the low-rank initialization, we adopt hierarchical low-rank contrastive learning to ensure training robustness. As for the latter, we initially employ visual analysis and identify that different tasks have clear distinctions in terms of proximity. Therefore, we introduce explicit task contrastive constraints in the prompt learning process based on task semantic distance. Experiments on two retrieval tasks show performance improvements with the introduction of a minimal number of parameters, demonstrating the effectiveness of our method.
Low-rank Prompt Interaction for Continual Vision-Language Retrieval
[ "Weicai Yan", "Ye Wang", "Wang Lin", "Zirun Guo", "Zhou Zhao", "Tao Jin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=G4jDjEbmMg
@inproceedings{ chen2024simpliguard, title={SimpliGuard: Robust Mesh Simplification In the Wild}, author={Peibin Chen and Xijin Zhang and Daniel Kang Du}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=G4jDjEbmMg} }
Polygonal meshes are widely used to represent complex geometries. However, the increasing complexity of models often leads to large meshes with millions of triangles, raising significant challenges for storage, transmission, and computation. Mesh simplification, a process of reducing the number of triangles in a mesh while preserving its overall shape and important features, has emerged as an indispensable technique to address these challenges. In this work, we focus on the problem of obtaining a visually consistent ultra-low-polygon mesh for complex meshes. Unlike previous methods, we design a robust simplification framework, SimpliGuard, to handle any meshes in the wild. Firstly, a reconstruction module is used to construct a low-polygon mesh with a similar shape but a manifold topology. Then, a texture initialization module is employed to quickly initialize the entire texture map. After that, a differentiable rendering module is utilized to optimize the overall structure and texture details, ensuring high-quality results. For meshes with skeletons, the correctness of motion can be preserved with our designed motion post-processing module. Experimental results demonstrate that SimpliGuard significantly outperforms previous methods and various featured software, including Blender and Simplygon.
SimpliGuard: Robust Mesh Simplification In the Wild
[ "Peibin Chen", "Xijin Zhang", "Daniel Kang Du" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=G3MrTYNMtS
@inproceedings{ gao2024multiscale, title={Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection}, author={Shixuan Gao and Pingping Zhang and Tianyu Yan and Huchuan Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=G3MrTYNMtS} }
Salient Object Detection (SOD) aims to identify and segment the most prominent objects in images. Existing SOD methods utilize various Transformer-based models for feature extraction. However, due to the scale of training datasets and training methods, these Transformer-based models still fall short in segmentation performance and generalization. The Segment Anything Model (SAM) is trained on a large-scale segmentation dataset, which gives it strong generalization and segmentation capabilities. Nonetheless, SAM requires accurate prompts of target objects, which are unavailable in SOD. Additionally, SAM lacks the utilization of multi-scale and multi-layer information, as well as the incorporation of fine-grained details. In order to apply SAM to SOD and address its shortcomings, we propose a Multi-scale and Detail-enhanced SAM (MDSAM). Specifically, we introduce a Lightweight Multi-scale Adapter (LMSA), which allows SAM to learn multi-scale information with few trainable parameters. Moreover, we propose a Multi-Layer Fusion Block (MLFB) to comprehensively utilize the multi-layer information from SAM's encoder. Finally, we propose a Detail Enhancement Module (DEM) to incorporate fine-grained details into SAM. Experimental results demonstrate the superior performance of our model on multiple SOD datasets and its strong generalization to other segmentation tasks. The source code will be publicly available.
Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection
[ "Shixuan Gao", "Pingping Zhang", "Tianyu Yan", "Huchuan Lu" ]
Conference
poster
2408.04326
[ "https://github.com/bellybeauty/mdsam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=G1tsqarGAw
@inproceedings{ ge2024worldgpt, title={World{GPT}: Empowering {LLM} as Multimodal World Model}, author={Zhiqi Ge and Hongzhe Huang and Mingze Zhou and Juncheng Li and Guoming Wang and Siliang Tang and Yueting Zhuang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=G1tsqarGAw} }
World models are progressively being employed across diverse fields, extending from basic environment simulation to complex scenario construction. However, existing models are mainly trained on domain-specific states and actions, and confined to single-modality state representations. In this paper, we introduce **WorldGPT**, a generalist world model built upon a Multimodal Large Language Model (MLLM). WorldGPT acquires an understanding of world dynamics through analyzing millions of videos across various domains. To further enhance WorldGPT's capability in specialized scenarios and long-term tasks, we have integrated it with a novel cognitive architecture that combines memory offloading, knowledge retrieval, and context reflection. As for evaluation, we build **WorldNet**, a multimodal state transition prediction benchmark encompassing varied real-life scenarios. Conducting evaluations on WorldNet directly demonstrates WorldGPT's capability to accurately model state transition patterns, affirming its effectiveness in understanding and predicting the dynamics of complex scenarios. We further explore WorldGPT's emerging potential in serving as a world simulator, helping multimodal agents generalize to unfamiliar domains through efficiently synthesizing multimodal instruction instances, which prove to be as reliable as authentic data for fine-tuning purposes. The project is available on the [anonymous website](https://anonymous.4open.science/r/WorldGPT-C3B1).
WorldGPT: Empowering LLM as Multimodal World Model
[ "Zhiqi Ge", "Hongzhe Huang", "Mingze Zhou", "Juncheng Li", "Guoming Wang", "Siliang Tang", "Yueting Zhuang" ]
Conference
oral
2404.18202
[ "https://github.com/dcdmllm/worldgpt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Fz6FxJ4gJr
@inproceedings{ tian2024qvd, title={{QVD}: Post-training Quantization for Video Diffusion Models}, author={Shilong Tian and Hong Chen and Chengtao Lv and Yu Liu and Jinyang Guo and Xianglong Liu and Shengxi Li and Hao Yang and Tao Xie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Fz6FxJ4gJr} }
Recently, video diffusion models (VDMs) have garnered significant attention due to their notable advancements in generating coherent and realistic video content. However, processing multiple frame features concurrently, coupled with the considerable model size, results in high latency and extensive memory consumption, hindering their broader application. Post-training quantization (PTQ) is an effective technique to reduce memory footprint and improve computational efficiency. Unlike in image diffusion, we observe that the temporal features, which are integrated into all frame features, exhibit pronounced skewness. Furthermore, we investigate significant inter-channel disparities and asymmetries in the activations of video diffusion models, resulting in low coverage of quantization levels by individual channels and increasing the challenge of quantization. To address these issues, we introduce the first PTQ strategy tailored for video diffusion models, dubbed QVD. Specifically, we propose the High Temporal Discriminability Quantization (HTDQ) method, designed for temporal features, which retains the high discriminability of quantized features, providing precise temporal guidance for all video frames. In addition, we present the Scattered Channel Range Integration (SCRI) method, which aims to improve the coverage of quantization levels across individual channels. Experimental validations across various models, datasets, and bit-width settings demonstrate the effectiveness of our QVD in terms of diverse metrics. In particular, we achieve near-lossless performance under the W8A8 setting, outperforming the current methods by 205.12 in FVD.
QVD: Post-training Quantization for Video Diffusion Models
[ "Shilong Tian", "Hong Chen", "Chengtao Lv", "Yu Liu", "Jinyang Guo", "Xianglong Liu", "Shengxi Li", "Hao Yang", "Tao Xie" ]
Conference
poster
2407.11585
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Fz4MkNcXl6
@inproceedings{ wu2024dgres, title={3D-{GRES}: Generalized 3D Referring Expression Segmentation}, author={Changli Wu and Yihang Liu and Yiwei Ma and Haowei Wang and Gen Luo and Jiayi Ji and Henghui Ding and Xiaoshuai Sun and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Fz4MkNcXl6} }
3D Referring Expression Segmentation (3D-RES) is dedicated to segmenting a specific instance within a 3D space based on a natural language description. However, current approaches are limited to segmenting a single target, restricting the versatility of the task. To overcome this limitation, we introduce Generalized 3D Referring Expression Segmentation (3D-GRES), which extends the capability to segment any number of instances based on natural language instructions. In addressing this broader task, we propose the Multi-Query Decoupled Interaction Network (MDIN), designed to break down multi-object segmentation tasks into simpler, individual segmentations. MDIN comprises two fundamental components: Text-driven Sparse Queries (TSQ) and Multi-object Decoupling Optimization (MDO). TSQ generates sparse point cloud features distributed over key targets as the initialization for queries. Meanwhile, MDO is tasked with assigning each target in multi-object scenarios to different queries while maintaining their semantic consistency. To adapt to this new task, we build a new dataset, namely Multi3DRes. Our comprehensive evaluations on this dataset demonstrate substantial enhancements over existing models, thus charting a new path for intricate multi-object 3D scene comprehension. The benchmark and code are available at https://github.com/sosppxo/MDIN.
3D-GRES: Generalized 3D Referring Expression Segmentation
[ "Changli Wu", "Yihang Liu", "Yiwei Ma", "Haowei Wang", "Gen Luo", "Jiayi Ji", "Henghui Ding", "Xiaoshuai Sun", "Rongrong Ji" ]
Conference
oral
2407.20664
[ "https://github.com/sosppxo/MDIN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FxcoztYv98
@inproceedings{ zhou2024foreground, title={Foreground Harmonization and Shadow Generation for Composite Image}, author={Jing Zhou and Ziqi Yu and Zhongyun Bao and Gang Fu and Weilei He and Chao Liang and Chunxia Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FxcoztYv98} }
We propose a method for light and shadow editing of outdoor disharmonious composite images, including foreground harmonization and cast shadow generation. Most existing work can only perform foreground appearance editing tasks or only focuses on shadow generation. In fact, lighting not only affects the brightness and color of objects, but also produces corresponding cast shadows. In recent years, diffusion models have demonstrated their strong generative capabilities, and due to their iterative denoising properties, they have a significant advantage in image restoration tasks. However, they often fail to preserve the content structure of the image. To this end, we propose an effective model to tackle the problem of foreground light-shadow editing. Specifically, we use a coarse shadow prediction module (SP) to generate coarse shadows for foreground objects. Then, we use the predicted results as prior knowledge to guide the generation of the harmony diffusion model. In this process, the primary task is to learn lighting variation to harmonize foreground regions. The secondary task is to generate high-quality cast shadows containing more details. Considering that existing datasets do not support the dual tasks of image harmonization and shadow generation, we construct a real outdoor dataset, IH-SG, covering various lighting conditions. Extensive experiments conducted on existing benchmark datasets and the IH-SG dataset demonstrate the superiority of our method.
Foreground Harmonization and Shadow Generation for Composite Image
[ "Jing Zhou", "Ziqi Yu", "Zhongyun Bao", "Gang Fu", "Weilei He", "Chao Liang", "Chunxia Xiao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FwlYlM6nLj
@inproceedings{ ma2024bidirectional, title={Bi-directional Task-Guided Network for Few-Shot Fine-Grained Image Classification}, author={Zhen-Xiang Ma and Zhen-Duo Chen and Li-Jun Zhao and Zi-Chao Zhang and Tai Zheng and Xin Luo and Xin-Shun Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FwlYlM6nLj} }
In recent years, the Few-Shot Fine-Grained Image Classification (FS-FGIC) problem has gained widespread attention. A number of effective methods have been proposed that focus on extracting discriminative information within high-level features in a single episode/task. However, this is insufficient for addressing the cross-task challenges of FS-FGIC, which is represented in two aspects. On the one hand, from the perspective of the Fine-Grained Image Classification (FGIC) task, there is a need to supplement the model with mid-level features containing rich fine-grained information. On the other hand, from the perspective of the Few-Shot Learning (FSL) task, explicit modeling of cross-task general knowledge is required. In this paper, we propose a novel Bi-directional Task-Guided Network (BTG-Net) to tackle these issues. Specifically, from the FGIC task perspective, we design the Semantic-Guided Noise Filtering (SGNF) module to filter noise on mid-level features rich in detailed information. Further, from the FSL task perspective, the General Knowledge Prompt Modeling (GKPM) module is proposed to retain the cross-task general knowledge by utilizing the prompting mechanism, thereby enhancing the model's generalization performance on novel classes. We have conducted extensive experiments on five fine-grained benchmark datasets, and the results demonstrate that BTG-Net outperforms state-of-the-art methods comprehensively.
Bi-directional Task-Guided Network for Few-Shot Fine-Grained Image Classification
[ "Zhen-Xiang Ma", "Zhen-Duo Chen", "Li-Jun Zhao", "Zi-Chao Zhang", "Tai Zheng", "Xin Luo", "Xin-Shun Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Fv2nCkaH2C
@inproceedings{ duan2024blind, title={Blind Video Bit-Depth Expansion}, author={Panjun Duan and Yang Zhao and Yuan Chen and Wei Jia and Zhao Zhang and Ronggang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Fv2nCkaH2C} }
With the rapid development of high-bit-depth display devices, bit-depth expansion (BDE) algorithms that extend low-bit-depth images to high-bit-depth images have received increasing attention. Due to the sensitivity of bit-depth distortions to tiny numerical changes in the least significant bits, the nuanced degradation differences in the training process may lead to varying degradation data distributions, causing the trained models to overfit specific types of degradations. This paper focuses on the problem of blind video BDE, proposing a degradation prediction and embedding framework, and designing a video BDE network based on a recurrent structure and dual-frame alignment fusion. Experimental results demonstrate that the proposed model can outperform some state-of-the-art (SOTA) models in terms of banding artifact removal and color correction, avoiding overfitting to specific degradations and obtaining better generalization ability across multiple datasets. The proposed degradation model and source codes will be open-sourced.
Blind Video Bit-Depth Expansion
[ "Panjun Duan", "Yang Zhao", "Yuan Chen", "Wei Jia", "Zhao Zhang", "Ronggang Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FfGKepNa1J
@inproceedings{ tan2024highly, title={Highly Efficient No-reference 4K Video Quality Assessment with Full-Pixel Covering Sampling and Training Strategy}, author={Xiaoheng Tan and Jiabin Zhang and Yuhui Quan and Jing Li and Yajing Wu and Zilin Bian}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FfGKepNa1J} }
Deep Video Quality Assessment (VQA) methods have shown impressive high-performance capabilities. Notably, no-reference (NR) VQA methods play a vital role in situations where obtaining reference videos is restricted or not feasible. Nevertheless, as more streaming videos are being created in ultra-high definition (e.g., 4K) to enrich viewers' experiences, the current deep VQA methods face unacceptable computational costs. Furthermore, the resizing, cropping, and local sampling techniques employed in these methods can compromise the details and content of original 4K videos, thereby negatively impacting quality assessment. In this paper, we propose a highly efficient and novel NR 4K VQA technology. Specifically, first, a novel data sampling and training strategy is proposed to tackle the problem of excessive resolution. This strategy allows the VQA Swin Transformer-based model to effectively train and make inferences using the full data of 4K videos on standard consumer-grade GPUs without compromising content or details. Second, a weighting and scoring scheme is developed to mimic the human subjective perception mode, which is achieved by considering the distinct impact of each sub-region within a 4K frame on the overall perception. Third, we incorporate the frequency domain information of video frames to better capture the details that affect video quality, consequently further improving the model's generalizability. To our knowledge, this is the first technology for the NR 4K VQA task. Thorough empirical studies demonstrate it not only significantly outperforms existing methods on a specialized 4K VQA dataset but also achieves state-of-the-art performance across multiple open-source NR video quality datasets.
Highly Efficient No-reference 4K Video Quality Assessment with Full-Pixel Covering Sampling and Training Strategy
[ "Xiaoheng Tan", "Jiabin Zhang", "Yuhui Quan", "Jing Li", "Yajing Wu", "Zilin Bian" ]
Conference
poster
2407.20766
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FZCOsdSAKW
@inproceedings{ he2024heterogeneous, title={Heterogeneous Graph Guided Contrastive Learning for Spatially Resolved Transcriptomics Data}, author={Xiao He and Chang Tang and Xinwang Liu and Chuankun Li and Shan An and Zhenglai Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FZCOsdSAKW} }
Spatial transcriptomics provides revolutionary insights into cellular interactions and disease development mechanisms by combining high-throughput gene sequencing and spatially resolved imaging technologies to analyze genes naturally associated with spatially variable tissue genes. However, existing methods typically map aggregated multi-view features into a unified representation, ignoring the heterogeneity and view independence of genes and spatial information. To this end, we construct a heterogeneous Graph guided Contrastive Learning (stGCL) for aggregating spatial transcriptomics data. The method is guided by the inherent heterogeneity of cellular molecules by dynamically coordinating triple-level node attributes through comparative learning loss distributed across view domains, thus maintaining view independence during the aggregation process. In addition, we introduce a cross-view hierarchical feature alignment module employing a parallel approach to decouple spatial and genetic views on molecular structures while aggregating multi-view features according to information theory, thereby enhancing the integrity of inter- and intra-views. Rigorous experiments demonstrate that stGCL outperforms existing methods in various tasks and related downstream applications.
Heterogeneous Graph Guided Contrastive Learning for Spatially Resolved Transcriptomics Data
[ "Xiao He", "Chang Tang", "Xinwang Liu", "Chuankun Li", "Shan An", "Zhenglai Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FUnM3xTKfg
@inproceedings{ wang2024multimodal, title={Multimodal {LLM} Enhanced Cross-lingual Cross-modal Retrieval}, author={Yabing Wang and Le Wang and Qiang Zhou and zhibin wang and Hao Li and Gang Hua and Wei Tang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FUnM3xTKfg} }
Cross-lingual cross-modal retrieval aims to retrieve visually relevant content based on non-English queries, without relying on human-labeled cross-modal data pairs during training. One popular approach involves utilizing machine translation (MT) to create pseudo-parallel data pairs, establishing correspondence between visual and non-English textual data. However, aligning their representations poses challenges due to the significant semantic gap between vision and text, as well as the lower quality of non-English representations caused by pre-trained encoders and data noise. To overcome these challenges, we propose LECCR, a novel solution that incorporates the multi-modal large language model (MLLM) to improve the alignment between visual and non-English representations. Specifically, we first employ MLLM to generate detailed visual content descriptions and aggregate them into multi-view semantic slots that encapsulate different semantics. Then, we take these semantic slots as internal features and leverage them to interact with the visual features. By doing so, we enhance the semantic information within the visual features, narrowing the semantic gap between modalities and generating local visual semantics for subsequent multi-level matching. Additionally, to further enhance the alignment between visual and non-English features, we introduce softened matching under English guidance. This approach provides more comprehensive and reliable inter-modal correspondences between visual and non-English features. Extensive experiments on two cross-lingual image-text retrieval benchmarks, Multi30K and MSCOCO, as well as two cross-lingual video-text retrieval benchmarks, VATEX and MSR-VTT-CN, demonstrate the effectiveness of our proposed method.
Multimodal LLM Enhanced Cross-lingual Cross-modal Retrieval
[ "Yabing Wang", "Le Wang", "Qiang Zhou", "zhibin wang", "Hao Li", "Gang Hua", "Wei Tang" ]
Conference
poster
2409.19961
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FUf0bNCw50
@inproceedings{ xie2024advancing, title={Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation}, author={JingJing Xie and Yuxin Zhang and Mingbao Lin and Liujuan Cao and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FUf0bNCw50} }
This paper presents the first study to explore the potential of parameter quantization for multimodal large language models to alleviate the significant resource constraint encountered during vision-language instruction tuning. We introduce a Quantization-aware Scale LeArning method based on multimodal Warmup, termed QSLAW. This method is grounded in two key innovations: (1) The learning of group-wise scale factors for quantized LLM weights to mitigate the quantization error arising from activation outliers and achieve more effective vision-language instruction tuning; (2) The implementation of a multimodal warmup that progressively integrates linguistic and multimodal training samples, thereby preventing overfitting of the quantized model to multimodal data while ensuring stable adaptation of multimodal large language models to downstream vision-language tasks. Extensive experiments demonstrate that models quantized by QSLAW perform on par with, or even surpass, their full-precision counterparts, while facilitating up to 1.4 times reduction in VL tuning time and GPU consumption.
Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation
[ "JingJing Xie", "Yuxin Zhang", "Mingbao Lin", "Liujuan Cao", "Rongrong Ji" ]
Conference
poster
2408.03735
[ "https://github.com/xjjxmu/qslaw" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FRUgSgnASr
@inproceedings{ zhou2024diffharmony, title={DiffHarmony++: Enhancing Image Harmonization with Harmony-{VAE} and Inverse Harmonization Model}, author={Pengfei Zhou and Fangxiang Feng and Guang Liu and Ruifan Li and Xiaojie Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FRUgSgnASr} }
Latent diffusion models have demonstrated impressive efficacy in image generation and editing tasks. Recently, they have also promoted the advancement of image harmonization. However, methods involving latent diffusion models all face a common challenge: the severe image distortion introduced by the VAE component, whereas image harmonization is a low-level image processing task that relies on pixel-level evaluation metrics. In this paper, we propose Harmony-VAE, which leverages the input of the harmonization task itself to enhance the quality of decoded images. The input, which involves a composite image, contains precise pixel-level information that can complement the correct foreground appearance and color information contained in the denoised latents. Meanwhile, the inherent generative nature of diffusion models makes them naturally suited to inverse image harmonization, i.e., generating synthetic composite images based on real images and foreground masks. We train an inverse harmonization diffusion model to perform data augmentation on two subsets of iHarmony4 and construct a new human harmonization dataset with prominent foreground objects. Extensive experiments demonstrate the effectiveness of our proposed Harmony-VAE and inverse harmonization model. The code, pretrained models, and the new dataset will be made publicly available.
DiffHarmony++: Enhancing Image Harmonization with Harmony-VAE and Inverse Harmonization Model
[ "Pengfei Zhou", "Fangxiang Feng", "Guang Liu", "Ruifan Li", "Xiaojie Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FOw5WiuxGl
@inproceedings{ wang2024autosfx, title={Auto{SFX}: Automatic Sound Effect Generation for Videos}, author={Yujia Wang and Zhongxu Wang and Hua Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FOw5WiuxGl} }
Sound Effect (SFX) generation primarily aims to automatically produce sound waves for sounding visual objects in images or videos. Rather than learning an automatic solution to this task, we aim to propose a much broader system, AutoSFX, that is widely applicable and less time-consuming, i.e., automating sound design for videos. Our key insight is that ensuring consistency between auditory and visual information, performing seamless transitions between sound clips, and harmoniously mixing sounds playing simultaneously is crucial for creating a unified audiovisual experience. AutoSFX capitalizes on this concept by aggregating multimodal representations via cross-attention and leverages a diffusion model to generate sound with visual information embedded. AutoSFX also optimizes the generated sounds to render the entire soundtrack for the input video, leading to a more immersive and engaging multimedia experience. We have developed a user-friendly interface for AutoSFX that enables users to interactively engage in SFX generation for their videos according to their particular needs. To validate the capability of our vision-to-sound generation, we conducted comprehensive experiments and analyses using the widely recognized VEGAS and VGGSound test sets, yielding promising results. We also conducted a user study to evaluate the performance of the optimized soundtrack and the usability of the interface. Overall, the results reveal that AutoSFX provides a viable sound landscape solution for making attractive videos.
AutoSFX: Automatic Sound Effect Generation for Videos
[ "Yujia Wang", "Zhongxu Wang", "Hua Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FNYMLAhOHW
@inproceedings{ yang2024domain, title={Domain Shared and Specific Prompt Learning for Incremental Monocular Depth Estimation}, author={Zhiwen Yang and Liang Li and Jiehua Zhang and Tingyu Wang and Yaoqi Sun and Chenggang Yan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FNYMLAhOHW} }
Incremental monocular depth estimation aims to continuously learn from new domains while maintaining performance on old domains. The catastrophic forgetting problem is the key challenge when the model adapts to dynamic scene variations. Previous methods usually address this forgetting problem by storing raw samples from the old domain, allowing the model to review the knowledge of the old domain. However, due to concerns over data privacy and security, our objective is to tackle the incremental monocular depth estimation problem in more stringent scenarios without the need for replaying samples. In this paper, we attribute the cross-domain catastrophic forgetting to domain distribution shifts and continuous variations of the depth space. To this end, we propose Domain Shared and Specific Prompt Learning (DSSP) for incremental monocular depth estimation. In detail, to alleviate the domain distribution shift, complementary domain prompts are designed to learn the domain-shared and domain-specific knowledge, which are optimized by the inter-domain alignment and intra-domain orthogonal losses. To mitigate the depth space variations, we first introduce a pre-trained model to generate the domain-shared depth space. Then, we design the $S^2$-Adapter, which quantizes depth space variations with scale&shift matrices and converts the domain-shared depth space to a domain-specific depth space. Our method achieves state-of-the-art performance under various scenarios such as different depth ranges, virtual and real data, different weather conditions, and the few-shot incremental learning setting on 12 datasets. We will release the source codes and pre-trained models.
Domain Shared and Specific Prompt Learning for Incremental Monocular Depth Estimation
[ "Zhiwen Yang", "Liang Li", "Jiehua Zhang", "Tingyu Wang", "Yaoqi Sun", "Chenggang Yan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FHguB1EYYi
@inproceedings{ li2024flipm, title={{FLIP}-80M: 80 Million Visual-Linguistic Pairs for Facial Language-Image Pre-Training}, author={Yudong Li and Xianxu Hou and Dezhi Zheng and Linlin Shen and Zhe Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FHguB1EYYi} }
While significant progress has been made in multi-modal learning driven by large-scale image-text datasets, there is still a noticeable gap in the availability of such datasets within the facial domain. To facilitate and advance the field of facial representation learning, we present FLIP-80M, a large-scale visual-linguistic dataset comprising over 80 million face images paired with text descriptions. The construction of FLIP-80M utilizes large-scale publicly available image-text-pair datasets, filtering 5 billion samples from the general domain, and incorporates AI-Generated Content (AIGC) methods for quality management and data augmentation. The data creation process involves a mixed-method pipeline to filter face-related pairs from both visual and linguistic perspectives, including face detection, face caption classification, text de-noising, and AIGC augmentation. As a result, FLIP-80M stands as the largest face-text dataset to date. It shows exceptional data quality and demonstrates the potential to enhance the performance of face representation models. To assess the efficacy of our dataset, we use a contrastive learning objective to train FLIP (Facial Language-Image Pretraining) and evaluate its representation capabilities across various downstream tasks. Experimental results reveal that our FLIP model achieves state-of-the-art results across 10 different face analysis tasks like face parsing, face alignment, and face attribute classification. The dataset and models will be publicly available.
FLIP-80M: 80 Million Visual-Linguistic Pairs for Facial Language-Image Pre-Training
[ "Yudong Li", "Xianxu Hou", "Dezhi Zheng", "Linlin Shen", "Zhe Zhao" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=FFIh7vYgyx
@inproceedings{ xu2024rsnn, title={{RSNN}: Recurrent Spiking Neural Networks for Dynamic Spatial-Temporal Information Processing}, author={Qi Xu and Xuanye Fang and Yaxin Li and Jiangrong Shen and De Ma and Yi Xu and Gang Pan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=FFIh7vYgyx} }
Spiking Neural Networks (SNNs) have great advantages in discrete event data processing because of their binary digital computation form. However, due to the limitations of current SNN structures, the original event data needs to be preprocessed to reduce the number of time calculation steps and the information redundancy. Traditional methods that divide the data into frames lead to the loss of a large amount of temporal information. In this paper, we propose an efficient Recurrent Spiking Neural Network (RSNN) that reduces the time-domain information loss of the original slice samples and uses spiking-based neural dynamics to process dynamic spatial-temporal information. In the RSNN model, a recurrent structure is used to preprocess slices before they are further input into the spiking structure, enhancing the temporal correlation between slices. In addition, to efficiently match the two-dimensional spatial structure of the data sample frames, this paper adopts a variant of the recurrent neural network, namely Convolution LSTM (ConvLSTM). Through experiments on event-based datasets such as DVS128-Gesture and CIFAR10-DVS, we find that the proposed model not only performs better than other spiking-based models but also saves energy and power consumption, which paves the way for practical applications of neuromorphic hardware.
RSNN: Recurrent Spiking Neural Networks for Dynamic Spatial-Temporal Information Processing
[ "Qi Xu", "Xuanye Fang", "Yaxin Li", "Jiangrong Shen", "De Ma", "Yi Xu", "Gang Pan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=F8w9Sx3f4O
@inproceedings{ li2024advancing, title={Advancing Multi-grained Alignment for Contrastive Language-Audio Pre-training}, author={Yiming Li and Zhifang Guo and Xiangdong Wang and Hong Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=F8w9Sx3f4O} }
Recent advances have been witnessed in audio-language joint learning, such as CLAP, which shows much success in multi-modal understanding tasks. These models usually aggregate uni-modal local representations, namely frame or word features, into global ones, on which the contrastive loss is employed to reach coarse-grained cross-modal alignment. However, frame-level correspondence with texts may be ignored by the above paradigm, leaving it ill-suited for explainability and fine-grained text-audio challenges (e.g., text-to-audio grounding), which may also undermine performance on coarse-grained tasks. In this work, we aim to improve both coarse- and fine-grained audio-language alignment in large-scale contrastive pre-training. To unify the granularity and latent distribution of two modalities, a shared codebook is adopted to represent multi-modal global features with common bases, and each internal codeword is regularized to encode modality-shared semantics, bridging the gap between frame and word features. Based on the above framework, a locality-aware block is introduced to purify local patterns, and a hard-negative guided loss is devised to boost alignment effects. Extensive experiments on eleven zero-shot coarse- and fine-grained evaluation protocols suggest that our model not only surpasses the baseline CLAP significantly but also yields superior or competitive results compared to current SOTA works. The code and model will be released upon paper acceptance.
Advancing Multi-grained Alignment for Contrastive Language-Audio Pre-training
[ "Yiming Li", "Zhifang Guo", "Xiangdong Wang", "Hong Liu" ]
Conference
oral
2408.07919
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=F837Or33RT
@inproceedings{ li2024dr, title={Dr. {CLIP}: {CLIP}-Driven Universal Framework for Zero-Shot Sketch Image Retrieval}, author={Xue Li and YU Jiong and Ziyang Li and Hongchun Lu and Ruifeng Yuan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=F837Or33RT} }
The field of Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) is currently undergoing a paradigm shift, transitioning from specialized models designed for individual tasks to more general retrieval models capable of managing various specialized scenarios. Inspired by the impressive generalization ability of the Contrastive Language-Image Pretraining (CLIP) model, we propose a CLIP-driven universal framework (Dr. CLIP), which leverages prompt learning to guide the synergy between CLIP and ZS-SBIR. Specifically, Dr. CLIP is a multi-branch network based on the CLIP image encoder and text encoder, which can perfectly cover four variants of ZS-SBIR tasks (inter-category, intra-category, cross-dataset, and generalization). Moreover, we decompose the synergy into classification learning, metric learning, and ranking learning, and introduce three key components to enhance learning effectiveness: i) a forgetting suppression idea is applied to prevent catastrophic forgetting and constrain the feature distribution of the new categories in classification learning; ii) a domain-balanced loss is proposed to address sample imbalance and establish effective cross-domain correlations in metric learning; iii) a pair-relation strategy is introduced to capture relevance and ranking relationships between instances in ranking learning. Finally, we reorganize and redivide three coarse-grained datasets and two fine-grained datasets to accommodate the training settings of the four ZS-SBIR tasks. The comparison experiments confirm that our method surpasses state-of-the-art (SOTA) methods by a significant margin (1.95%–19.14% mAP), highlighting its generality and superiority.
Dr. CLIP: CLIP-Driven Universal Framework for Zero-Shot Sketch Image Retrieval
[ "Xue Li", "YU Jiong", "Ziyang Li", "Hongchun Lu", "Ruifeng Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=F300EXl6hY
@inproceedings{ yuan2024dualcriterion, title={Dual-Criterion Quality Loss for Blind Image Quality Assessment}, author={Desen Yuan and Lei Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=F300EXl6hY} }
This paper introduces a novel approach to Image Quality Assessment (IQA) by presenting a new loss function, Dual-Criterion Quality (DCQ) Loss, which integrates the Mean Squared Error (MSE) framework with a Relative Perception Constraint (RPC). The RPC comprises two main components: the Quantitative Discrepancy Constraint (QDC) and the Qualitative Alignment Constraint (QAC). The QDC focuses on capturing the numerical relationships of relative differences by minimizing the mean squared error between the differences in predicted scores among samples within a batch and the differences in Mean Opinion Scores (MOS). Meanwhile, the QAC aims to capture the ordinal relationships between these differences. This method is designed to closely align with human subjective assessments of image quality, which are frequently quantified using the MOS, and to enhance the interpretability and reliability of IQA. Unlike existing ranking methods that suffer from complex pipelines and the introduction of errors through the generation of pair-wise or ordering data, DCQ Loss provides a more straightforward and efficient approach. Moreover, the loss function outperforms current rank-based IQA methods in terms of convergence, stability, and the ability to emulate human perception of visual quality. The effectiveness of this approach is validated through extensive experiments on various mainstream datasets and IQA network architectures, demonstrating significant performance gains over traditional rank loss approaches and contributing to the ongoing development of IQA.
Dual-Criterion Quality Loss for Blind Image Quality Assessment
[ "Desen Yuan", "Lei Wang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=F0xfirktyA
@inproceedings{ he2024refmaskd, title={RefMask3D: Language-Guided Transformer for 3D Referring Segmentation}, author={Shuting He and Henghui Ding}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=F0xfirktyA} }
3D referring segmentation is an emerging and challenging vision-language task that aims to segment the object described by a natural language expression in a point cloud scene. The key challenge behind this task is vision-language feature fusion and alignment. In this work, we propose RefMask3D to explore the comprehensive multi-modal feature interaction and understanding. First, we propose a Geometry-Enhanced Group-Word Attention to integrate language with geometrically coherent sub-clouds through cross-modal group-word attention, which effectively addresses the challenges posed by the sparse and irregular nature of point clouds. Then, we introduce a Linguistic Primitives Construction to produce semantic primitives representing distinct semantic attributes, which greatly enhance the vision-language understanding at the decoding stage. Furthermore, we introduce an Object Cluster Module that analyzes the interrelationships among linguistic primitives to consolidate their insights and pinpoint common characteristics, helping to capture holistic information and enhance the precision of target identification. The proposed RefMask3D achieves new state-of-the-art performance on 3D referring segmentation, 3D visual grounding, and also 2D referring image segmentation. Especially, RefMask3D outperforms previous state-of-the-art method by a large margin of 5.36% mIoU on the challenging ScanRefer dataset.
RefMask3D: Language-Guided Transformer for 3D Referring Segmentation
[ "Shuting He", "Henghui Ding" ]
Conference
poster
2407.18244
[ "https://github.com/heshuting555/refmask3d" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=F02jfSVPmR
@inproceedings{ bai2024fslquickboost, title={{FSL}-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning}, author={Yunwei Bai and Bill Cai and Ying Kiat Tan and Zangwei Zheng and Shiming Chen and Tsuhan Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=F02jfSVPmR} }
Few-shot learning (FSL) usually trains models on data from one set of classes, but tests them on data from a different set of classes, providing a few labeled support samples of the unseen classes as a reference for the trained model. Due to the lack of training data relevant to the target, there is usually high generalization error with respect to the test classes. Some existing methods attempt to address this generalization issue through ensemble. However, current ensemble-based FSL methods can be computationally expensive. In this work, we conduct empirical explorations and propose an ensemble method (namely QuickBoost), which is efficient and effective for improving the generalization of FSL. Specifically, QuickBoost includes an alternative-architecture pretrained encoder with a one-vs-all binary classifier (namely FSL-Forest) based on random forest algorithm, and is ensembled with the off-the-shelf FSL models via logit-level averaging. Extensive experiments on three benchmarks demonstrate that our method achieves state-of-the-art performance with good efficiency.
FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning
[ "Yunwei Bai", "Bill Cai", "Ying Kiat Tan", "Zangwei Zheng", "Shiming Chen", "Tsuhan Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Eqy86ekyWk
@inproceedings{ yang2024enhancing, title={Enhancing Transformer-based Semantic Matching for Few-shot Learning through Weakly Contrastive Pre-training}, author={Wei Yang and Tengfei Huo and Zhiqiang Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Eqy86ekyWk} }
The task of semantic text matching focuses on measuring the semantic similarity between two distinct texts and is widely applied in search and ranking scenarios. In recent years, pre-trained models based on the Transformer architecture have demonstrated powerful semantic representation capabilities and have become the mainstream method for text representation. The pipeline of fine-tuning pre-trained language models on downstream semantic matching tasks has achieved promising results and widespread adoption. However, practical downstream scenarios often face severe challenges in terms of data quality and quantity. Ensuring high-quality and large quantities of samples is often difficult. Current research on enhancing pre-trained models for few-shot semantic text matching tasks is still not advanced enough. Therefore, this paper focuses on providing a general enhancement scheme for few-shot semantic text matching tasks. Specifically, we propose an Enhanced Transformer-based Semantic Matching method for few-shot learning through weakly contrastive pre-training, named EBSIM. Firstly, considering the characteristics of semantic text matching tasks, we design a simple and cost-effective data augmentation method for constructing weakly supervised samples. Then, we design a contrastive learning objective based on the alignment aspect to achieve effective semantic matching by optimizing the bidirectional semantic perception between constructed texts. We conduct comprehensive experiments on five Chinese and English semantic text matching datasets using various Transformer-based pre-trained models. The experimental results confirm that our proposed method significantly improves the model's performance on semantic text matching tasks. Further ablation experiments and case studies validate the effectiveness of our approach. Our code and data will be made publicly available at a later stage.
Enhancing Transformer-based Semantic Matching for Few-shot Learning through Weakly Contrastive Pre-training
[ "Wei Yang", "Tengfei Huo", "Zhiqiang Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ep8jSB23bm
@inproceedings{ frolov2024objblur, title={ObjBlur: A Curriculum Learning Approach With Progressive Object-Level Blurring for Improved Layout-to-Image Generation}, author={Stanislav Frolov and Brian Bernhard Moser and Sebastian Palacio and Andreas Dengel}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ep8jSB23bm} }
We present ObjBlur, a novel curriculum learning approach to improve layout-to-image generation models, where the task is to produce realistic images from layouts composed of boxes and labels. Our method is based on progressive object-level blurring, which effectively stabilizes training and enhances the quality of generated images. This curriculum learning strategy systematically applies varying degrees of blurring to individual objects or the background during training, starting from strong blurring to progressively cleaner images. Our findings reveal that this approach yields significant performance improvements, stabilized training, smoother convergence, and reduced variance between multiple runs. Moreover, our technique demonstrates its versatility by being compatible with generative adversarial networks and diffusion models, underlining its applicability across various generative modeling paradigms. With ObjBlur, we reach new state-of-the-art results on the complex COCO and Visual Genome datasets.
ObjBlur: A Curriculum Learning Approach With Progressive Object-Level Blurring for Improved Layout-to-Image Generation
[ "Stanislav Frolov", "Brian Bernhard Moser", "Sebastian Palacio", "Andreas Dengel" ]
Conference
poster
2404.07564
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EjjY5yJzQG
@inproceedings{ zhang2024document, title={Document Registration: Towards Automated Labeling of Pixel-Level Alignment Between Warped-Flat Documents}, author={Weiguang Zhang and Qiufeng Wang and Kaizhu Huang and Xiaowei Huang and Fengjun Guo and Xiaomeng Gu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=EjjY5yJzQG} }
Photographed documents are prevalent but often suffer from deformations like curves or folds, hindering readability. Consequently, document dewarping has been widely studied; however, its performance is still not satisfactory due to the lack of real training samples with pixel-level annotations. To obtain the pixel-level labels, we leverage a document registration pipeline to automatically align warped-flat documents. Unlike general image registration works, registering documents poses unique challenges due to their severe deformations and fine-grained textures. In this paper, we introduce a coarse-to-fine framework comprising a coarse registration network (CRN) that aims to eliminate severe deformations, followed by a fine registration network (FRN) that focuses on fine-grained features. In addition, we utilize self-supervised learning to initialize our document registration model, where we propose a cross-reconstruction pre-training task on pairs of warped-flat documents. Extensive experiments show that we can achieve satisfactory document registration performance, consequently obtaining a high-quality registered document dataset with pixel-level annotations. Without bells and whistles, we re-train two popular document dewarping models on our registered document dataset WarpDoc-R and obtain performance superior to that of models using almost 100× the amount of synthetic training data, verifying the label quality of our document registration method. The code and pixel-level labels will be released.
Document Registration: Towards Automated Labeling of Pixel-Level Alignment Between Warped-Flat Documents
[ "Weiguang Zhang", "Qiufeng Wang", "Kaizhu Huang", "Xiaowei Huang", "Fengjun Guo", "Xiaomeng Gu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ee96nP8xOh
@inproceedings{ yang2024maximizing, title={Maximizing Feature Distribution Variance for Robust Neural Networks}, author={Hao Yang and Min Wang and zhengfei Yu and Zhi Zeng and Mingrui Lao and Yun Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ee96nP8xOh} }
The security of Deep Neural Networks (DNNs) has proven to be critical for their applicability in real-world scenarios. However, DNNs are well known to be vulnerable to adversarial attacks, such as adding artificially designed, imperceptible perturbations to the benign input. Therefore, adversarial robustness is essential for DNNs to defend against malicious attacks. Stochastic Neural Networks (SNNs) have recently shown effective performance in enhancing adversarial robustness by injecting uncertainty into models. Nevertheless, existing SNNs are still limited for adversarial defense due to the insufficient representation capability caused by their fixed uncertainty. In this paper, to elevate the feature representation capability of SNNs, we propose a novel yet practical stochastic neural network that maximizes feature distribution variance (MFDV-SNN). In addition, we provide theoretical insights to support the adversarial resistance of MFDV, which primarily derives from the stochastic noise injected into DNNs. Our research demonstrates that by gradually increasing the level of stochastic noise in a DNN, the model naturally becomes more resistant to input perturbations. Since adversarial training is not required, MFDV-SNN does not compromise clean-data accuracy and saves up to 7.5 times the computation time. Extensive experiments on various attacks demonstrate that MFDV-SNN improves adversarial robustness significantly compared to other methods.
Maximizing Feature Distribution Variance for Robust Neural Networks
[ "Hao Yang", "Min Wang", "zhengfei Yu", "Zhi Zeng", "Mingrui Lao", "Yun Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Ebzo6s2xtv
@inproceedings{ han2024dunet, title={D$^3$U-Net: Dual-Domain Collaborative Optimization Deep Unfolding Network for Image Compressive Sensing}, author={Kai Han and Jin Wang and Yunhui Shi and Nam Ling and Baocai Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Ebzo6s2xtv} }
Deep unfolding network (DUN) is a powerful technique for image compressive sensing that bridges the gap between optimization methods and deep networks. However, DUNs usually rely heavily on single-domain information, overlooking the inter-domain dependencies. Therefore, such DUNs often face the following challenges: 1) information loss due to the inefficient representation within a single domain, and 2) limited robustness due to the absence of inter-domain dependencies. To overcome these challenges, we propose a deep unfolding framework D$^3$U-Net that establishes a dual-domain collaborative optimization scheme. This framework introduces both visual representations from the image domain and multi-resolution analysis provided by the wavelet domain. Such dual-domain representations constrain the feasible region within the solution space more accurately. Specifically, we design a consistency-difference collaborative mechanism to capture inter-domain dependencies effectively. This mechanism not only enhances the fidelity of reconstruction but also enriches the depth and breadth of extracted features, improving the overall robustness and reconstruction quality. Moreover, we develop an inter-stage transmission pathway to minimize the information loss during transmission while broadcasting multi-scale features in a frequency-adaptive manner. Extensive experimental results on various benchmark datasets show the superior performance of our method.
D^3U-Net: Dual-Domain Collaborative Optimization Deep Unfolding Network for Image Compressive Sensing
[ "Kai Han", "Jin Wang", "Yunhui Shi", "Nam Ling", "Baocai Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EZHxCOnV7n
@inproceedings{ matsuhira2024investigating, title={Investigating Conceptual Blending of a Diffusion Model for Improving Nonword-to-Image Generation}, author={Chihaya Matsuhira and Marc A. Kastner and Takahiro Komamizu and Takatsugu Hirayama and Ichiro Ide}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=EZHxCOnV7n} }
Text-to-image diffusion models sometimes depict blended concepts in generated images. One promising use case of this effect would be the nonword-to-image generation task which attempts to generate images intuitively imaginable from a non-existing word (nonword). To realize nonword-to-image generation, an existing study focused on associating nonwords with similar-sounding words. Since each nonword can have multiple similar-sounding words, generating images containing their blended concepts would increase intuitiveness, facilitating creative activities and promoting computational psycholinguistics. Nevertheless, no existing study has quantitatively evaluated this effect in either diffusion models or the nonword-to-image generation paradigm. Therefore, this paper first analyzes the conceptual blending in one of the pretrained diffusion models called Stable Diffusion. The analysis reveals that a high percentage of generated images depict blended concepts when inputting an embedding interpolating between the text embeddings of two text prompts referring to different concepts. Next, this paper explores the best text embedding space conversion method of an existing nonword-to-image generation framework to ensure both the occurrence of conceptual blending and image generation quality. We compare the conventional direct prediction approach with the proposed method that combines $k$-nearest neighbor search and linear regression. Evaluation reveals that the enhanced accuracy of the embedding space conversion by the proposed method improves the image generation quality, while the emergence of conceptual blending could be attributed mainly to the specific dimensions of the high-dimensional text embedding space.
Investigating Conceptual Blending of a Diffusion Model for Improving Nonword-to-Image Generation
[ "Chihaya Matsuhira", "Marc A. Kastner", "Takahiro Komamizu", "Takatsugu Hirayama", "Ichiro Ide" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EZ3mntWoL2
@inproceedings{ lin2024ftfer, title={{FTF}-{ER}: Feature-Topology Fusion-Based Experience Replay Method for Continual Graph Learning}, author={Changqing Lin and Jinhui Pang and Xiaoshuai Hao and Rong Yin and Zixuan Wang and Zhihui Zhang and Jinglin He and HUANG TAI SHENG}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=EZ3mntWoL2} }
Continual graph learning (CGL) is an important and challenging task that aims to extend static GNNs to dynamic task flow scenarios. As one of the mainstream CGL methods, the experience replay (ER) method receives widespread attention due to its superior performance. However, existing ER methods focus on identifying samples by feature significance or topological relevance, which limits their utilization of comprehensive graph data. In addition, the topology-based ER methods only consider local topological information and add neighboring nodes to the buffer, which ignores the global topological information and increases memory overhead. To bridge these gaps, we propose a novel method called Feature-Topology Fusion-based Experience Replay (FTF-ER) to effectively mitigate the catastrophic forgetting issue with enhanced efficiency. Specifically, from an overall perspective to maximize the utilization of the entire graph data, we propose a highly complementary approach including both feature and global topological information, which can significantly improve the effectiveness of the sampled nodes. Moreover, to further utilize global topological information, we propose Hodge Potential Score (HPS) as a novel module to calculate the topological importance of nodes. HPS derives a global node ranking via Hodge decomposition on graphs, providing more accurate global topological information compared to neighbor sampling. By excluding neighbor sampling, HPS significantly reduces buffer storage costs for acquiring topological information and simultaneously decreases training time. Compared with state-of-the-art methods, FTF-ER achieves a significant improvement of 3.6% in AA and 7.1% in AF on the OGB-Arxiv dataset, demonstrating its superior performance in the class-incremental learning setting.
FTF-ER: Feature-Topology Fusion-Based Experience Replay Method for Continual Graph Learning
[ "Changqing Lin", "Jinhui Pang", "Xiaoshuai Hao", "Rong Yin", "Zixuan Wang", "Zhihui Zhang", "Jinglin He", "HUANG TAI SHENG" ]
Conference
poster
2407.19429
[ "https://github.com/CyanML/FTF-ER" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ERuypCHYvX
@inproceedings{ ma2024equilibrated, title={Equilibrated Diffusion: Frequency-aware Textual Embedding for Equilibrated Image Customization}, author={Liyuan Ma and Xueji Fang and Guo-Jun Qi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ERuypCHYvX} }
Image customization involves learning the subject from provided concept images and generating it within textual contexts, typically yielding alterations of attributes such as style or background. Prevailing methods primarily rely on fine-tuning techniques, wherein a unified latent embedding is employed to characterize various concept attributes. However, the attribute entanglement makes it challenging for the customized result to mitigate the influence of subject-irrelevant attributes (e.g., style and background). To overcome these issues, we propose Equilibrated Diffusion, an innovative method that achieves equilibrated image customization by decoupling entangled concept attributes from a frequency-aware perspective, thus harmonizing textual and visual consistency. Unlike conventional approaches that employ a shared latent embedding and tuning process to learn the concept, our Equilibrated Diffusion draws inspiration from the correlation of high- and low-frequency components with image style and content, decomposing the concept accordingly in the frequency domain. By independently optimizing concept embeddings in the frequency domain, the denoising model not only enriches its comprehension of style attributes irrelevant to subject identity but also inherently augments its aptitude for accommodating novel stylized descriptions. Furthermore, by combining different frequency embeddings, our model retains the original spatial customization capability. We further design a diffusion process guided by subject masks to alleviate the influence of background attributes, thereby strengthening text alignment. To ensure subject-related information consistency, Residual Reference Attention (RRA) is incorporated into the spatial attention computation of the denoising model, effectively preserving structural details. Experimental results demonstrate that Equilibrated Diffusion surpasses other competitors with better subject consistency while closely adhering to text descriptions, thus validating the superiority of our approach.
Equilibrated Diffusion: Frequency-aware Textual Embedding for Equilibrated Image Customization
[ "Liyuan Ma", "Xueji Fang", "Guo-Jun Qi" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ENMl53MpKm
@inproceedings{ lv2024rethinking, title={Rethinking the Effect of Uninformative Class Name in Prompt Learning}, author={Fengmao Lv and Changru Nie and Jianyang Zhang and Guowu Yang and Guosheng Lin and Xiao Wu and Tianrui Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ENMl53MpKm} }
Large pre-trained vision-language models like CLIP have shown amazing zero-shot recognition performance. To adapt pre-trained vision-language models to downstream tasks, recent studies have focused on the "learnable context + class name" paradigm, which learns continuous prompt contexts on downstream datasets. In practice, the learned prompt context tends to overfit the base categories and cannot generalize well to novel categories outside the training data. Recent works have also noticed this problem and have proposed several improvements. In this work, we draw a new insight based on empirical analysis, that is, uninformative class names lead to degraded base-to-novel generalization performance in prompt learning, which is usually overlooked by existing works. Under this motivation, we advocate improving the base-to-novel generalization performance of prompt learning by enhancing the semantic richness of class names. We coin our approach the Information Disengagement based Associative Prompt Learning (IDAPL) mechanism, which considers the associative yet decoupled learning of prompt context and class name embedding. IDAPL can effectively alleviate the phenomenon of the learnable context overfitting to base classes while learning a more informative semantic representation of base classes by fine-tuning the class name embedding, leading to improved performance on both base and novel classes. Experimental results on eleven widely used few-shot learning benchmarks clearly validate the effectiveness of our proposed approach. Code is available at https://github.com/tiggers23/IDAPL
Rethinking the Effect of Uninformative Class Name in Prompt Learning
[ "Fengmao Lv", "Changru Nie", "Jianyang Zhang", "Guowu Yang", "Guosheng Lin", "Xiao Wu", "Tianrui Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EIhApjnzP5
@inproceedings{ zhu2024reproducing, title={Reproducing the Past: A Dataset for Benchmarking Inscription Restoration}, author={Shipeng Zhu and Hui Xue and Na Nie and Chenjie Zhu and Haiyue Liu and Pengfei Fang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=EIhApjnzP5} }
Inscriptions on ancient steles, as carriers of culture, encapsulate the humanistic thoughts and aesthetic values of our ancestors. However, these relics often deteriorate due to environmental and human factors, resulting in significant information loss. Since the advent of inscription rubbing technology over a millennium ago, archaeologists and epigraphers have devoted immense effort to manually restoring these cultural imprints, endeavoring to unlock the storied past within each rubbing. This paper approaches this challenge as a multi-modal task, aiming to establish a novel benchmark for the inscription restoration from rubbings. In doing so, we construct the Chinese Inscription Rubbing Image (CIRI) dataset, which includes a wide variety of real inscription rubbing images characterized by diverse calligraphy styles, intricate character structures, and complex degradation forms. Furthermore, we develop a synthesis approach to generate "intact-degraded" paired data, mirroring real-world degradation faithfully. On top of the datasets, we propose a baseline framework that achieves visual consistency and textual integrity through global and local diffusion-based restoration processes and explicit incorporation of domain knowledge. Comprehensive evaluations confirm the effectiveness of our pipeline, demonstrating significant improvements in visual presentation and textual integrity. The project is available at: https://github.com/blackprotoss/CIRI.
Reproducing the Past: A Dataset for Benchmarking Inscription Restoration
[ "Shipeng Zhu", "Hui Xue", "Na Nie", "Chenjie Zhu", "Haiyue Liu", "Pengfei Fang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DtZqzFYXgU
@inproceedings{ wang2024unil, title={UniL: Point Cloud Novelty Detection through Multimodal Pre-training}, author={Yuhan Wang and Mofei Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DtZqzFYXgU} }
3D novelty detection plays a crucial role in various real-world applications, especially in safety-critical fields such as autonomous driving and intelligent surveillance systems. However, existing 3D novelty detection methods are constrained by the scarcity of 3D data, which may impede the model's ability to learn adequate representations, thereby impacting detection accuracy. To address this challenge, we propose a Unified Learning Framework (UniL) for facilitating novelty detection. During the pretraining phase, UniL assists the point cloud encoder in learning information from other modalities, aligning visual, textual, and 3D features within the same feature space. Additionally, we introduce a novel Multimodal Supervised Contrastive Loss (MSC Loss) to improve the model's ability to cluster samples from the same category in feature space by leveraging label information during pretraining. Furthermore, we propose a straightforward yet powerful scoring method, Depth Map Error (DME), which assesses the discrepancy between projected depth maps before and after point cloud reconstruction during novelty detection. Extensive experiments conducted on 3DOS have demonstrated the effectiveness of our approach, significantly enhancing the performance of the unsupervised VAE method in 3D novelty detection. The code will be made available.
UniL: Point Cloud Novelty Detection through Multimodal Pre-training
[ "Yuhan Wang", "Mofei Song" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DntswNJ3RN
@inproceedings{ xiao2024pbic, title={P-BiC: Ultra-High-Definition Image Demoireing via Patch Bilateral Compensation}, author={Zeyu Xiao and Zhihe Lu and Xinchao Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DntswNJ3RN} }
Nowadays, people use smartphones to capture photos from multimedia platforms. The presence of moiré patterns resulting from spectral aliasing can significantly degrade the visual quality of images, particularly in ultra-high-definition (UHD) images. However, existing demoiréing methods have mostly been designed for low-definition images, making them unsuitable for handling moiré patterns in UHD images due to their substantial memory requirements. In this paper, we propose a novel patch bilateral compensation network (P-BiC) for moiré pattern removal in UHD images, which is memory-efficient and prior-knowledge-based. Specifically, we divide the UHD images into small patches and perform patch-level demoiréing to maintain a low memory cost even for ultra-large image sizes. Moreover, a pivotal insight, namely that the green channel of an image remains relatively less affected by moiré patterns while the tone information in moiré images is still well retained despite color shifts, is directly harnessed for the purpose of bilateral compensation. The bilateral compensation is achieved by two key components in our P-BiC, i.e., a green-guided detail transfer (G$^2$DT) module that complements distorted features with the intact content, and a style-aware tone adjustment (STA) module for color adjustment. We quantitatively and qualitatively evaluate the effectiveness of P-BiC with extensive experiments.
P-BiC: Ultra-High-Definition Image Demoireing via Patch Bilateral Compensation
[ "Zeyu Xiao", "Zhihe Lu", "Xinchao Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DmSJakN9cr
@inproceedings{ yang2024multimodal, title={Multimodal Contextual Interactions of Entities: A Modality Circular Fusion Approach for Link Prediction}, author={Jing Yang and ShunDong Yang and Yuan Gao and JieMing Yang and Laurence Tianruo Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DmSJakN9cr} }
Link prediction aims to infer missing valid triplets to complete knowledge graphs, with the recent inclusion of multimodal information to enrich entity representations. Existing methods project multimodal information into a unified embedding space or learn modality-specific features separately for later integration. However, performance in such studies was limited because they neglect the modality compatibility and conflicting semantics carried by entities in valid and invalid triplets. In this paper, we aim at modeling inter-entity modality interactions and thus propose a novel modality circular fusion approach (MoCi), which interweaves the multimodal contexts of entities. Firstly, unlike most methods in this task that directly fuse modalities, we design a triplets-prompt modality contrastive pre-training to align modality semantics beforehand. Moreover, we propose a modality circular fusion model using a simple yet efficient multilinear transformation strategy. This allows explicit inter-entity modality interactions, distinguishing it from methods confined to fusion within individual entities. To the best of our knowledge, MoCi presents one of the pioneering frameworks tailored to grasp inter-entity modality semantics for better link prediction. Extensive experiments on seven datasets demonstrate that our model yields SOTA performance, confirming the efficacy of MoCi in modeling inter-entity modality interactions. Our code is released at https://github.com/MoCiGitHub/MoCi.
Multimodal Contextual Interactions of Entities: A Modality Circular Fusion Approach for Link Prediction
[ "Jing Yang", "ShunDong Yang", "Yuan Gao", "JieMing Yang", "Laurence Tianruo Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DkiAOcGQHy
@inproceedings{ tan2024synopground, title={SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from {TV} Dramas and Synopses}, author={Chaolei Tan and Zihang Lin and Junfu Pu and Zhongang Qi and Wei-Yi Pei and Zhi Qu and Yexin Wang and Ying Shan and Wei-Shi Zheng and Jian-Fang Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DkiAOcGQHy} }
Video grounding is a fundamental problem in multimodal content understanding, aiming to localize specific natural language queries in an untrimmed video. However, current video grounding datasets merely focus on simple events and are either limited to shorter videos or brief sentences, which hinders the model from evolving toward stronger multimodal understanding capabilities. To address these limitations, we present a large-scale video grounding dataset named SynopGround, in which more than 2800 hours of videos are sourced from popular TV dramas and are paired with accurately localized human-written synopses. Each paragraph in the synopsis serves as a language query and is manually annotated with precise temporal boundaries in the long video. These paragraph queries are tightly correlated to each other and contain a wealth of abstract expressions summarizing video storylines and specific descriptions portraying event details, which enables the model to learn multimodal perception on more intricate concepts over longer context dependencies. Based on the dataset, we further introduce a more complex setting of video grounding dubbed Multi-Paragraph Video Grounding (MPVG), which takes as input multiple paragraphs and a long video for grounding each paragraph query to its temporal interval. In addition, we propose a novel Local-Global Multimodal Reasoner (LGMR) to explicitly model the local-global structures of long-term multimodal inputs for MPVG. Our method provides an effective baseline solution to the multi-paragraph video grounding problem. Extensive experiments verify the proposed model's effectiveness as well as its superiority in long-term multi-paragraph video grounding over prior state-of-the-arts. Dataset and code are publicly available. Project page: https://synopground.github.io/.
SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses
[ "Chaolei Tan", "Zihang Lin", "Junfu Pu", "Zhongang Qi", "Wei-Yi Pei", "Zhi Qu", "Yexin Wang", "Ying Shan", "Wei-Shi Zheng", "Jian-Fang Hu" ]
Conference
poster
2408.01669
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DkPInDljfM
@inproceedings{ liu2024mvpbev, title={{MVP}bev: Multi-view Perspective Image Generation from {BEV} with Test-time Controllability and Generalizability}, author={Buyu Liu and Kai Wang and Yansong Liu and Jun Bao and Tingting Han and Jun Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DkPInDljfM} }
This work aims to address multi-view perspective RGB generation from text prompts given Bird's-Eye-View (BEV) semantics. Unlike prior methods that neglect layout consistency, lack the ability to handle detailed text prompts, or are incapable of generalizing to unseen viewpoints, MVPbev simultaneously generates cross-view consistent images of different perspective views with a two-stage design, allowing object-level control and novel view generation at test time. Specifically, MVPbev first projects the given BEV semantics to perspective views with camera parameters, empowering the model to generalize to unseen viewpoints. Then we introduce a multi-view attention module in which special initialization and denoising processes are introduced to explicitly enforce local consistency among overlapping views w.r.t. cross-view homography. Last but not least, MVPbev further allows test-time instance-level controllability by refining a pre-trained text-to-image diffusion model. Our extensive experiments on NuScenes demonstrate that our method is capable of generating high-resolution photorealistic images from text descriptions with thousands of training samples, surpassing state-of-the-art methods under various evaluation metrics. We further demonstrate the advantages of our method in terms of generalizability and controllability with the help of novel evaluation metrics and comprehensive human analysis. Our code and model will be made available.
MVPbev: Multi-view Perspective Image Generation from BEV with Test-time Controllability and Generalizability
[ "Buyu Liu", "Kai Wang", "Yansong Liu", "Jun Bao", "Tingting Han", "Jun Yu" ]
Conference
poster
2407.19468
[ "https://github.com/kkaiwwana/mvpbev" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DkNhsmd0RW
@inproceedings{ liu2024detecting, title={Detecting Multimodal Situations with Insufficient Context and Abstaining from Baseless Predictions}, author={Junzhang Liu and Zhecan Wang and Hammad Ayyubi and Haoxuan You and Chris Thomas and Rui Sun and Shih-Fu Chang and Kai-Wei Chang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DkNhsmd0RW} }
Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples where answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucinations as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach. Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector to identify samples lacking sufficient context and enhance model accuracy by abstaining from responding if the required context is absent. CARA exhibits generalization to new benchmarks it wasn't trained on, underscoring its utility for future VLU benchmarks in detecting or cleaning samples with inadequate context. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient context detectors. Overall, our work represents a significant advancement in ensuring that vision-language models generate trustworthy and evidence-based outputs in complex real-world scenarios.
Detecting Multimodal Situations with Insufficient Context and Abstaining from Baseless Predictions
[ "Junzhang Liu", "Zhecan Wang", "Hammad Ayyubi", "Haoxuan You", "Chris Thomas", "Rui Sun", "Shih-Fu Chang", "Kai-Wei Chang" ]
Conference
poster
2405.11145
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Dk4LyZ654W
@inproceedings{ zhuang2024future, title={Future Motion Dynamic Modeling via Hybrid Supervision for Multi-Person Motion Prediction Uncertainty Reduction}, author={Yan Zhuang and Yanlu Cai and WEIZHONG ZHANG and Cheng Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Dk4LyZ654W} }
Multi-person motion prediction remains a challenging problem due to the intricate motion dynamics and complex interpersonal interactions, where uncertainty escalates rapidly across the forecasting horizon. Existing approaches tend to overlook motion dynamic modeling among the prediction frames as a way to reduce this uncertainty and leave it entirely up to deep neural networks, which lack a dynamic inductive bias, leading to suboptimal performance. This paper addresses this limitation by proposing an effective multi-person motion prediction method named Hybrid Supervision Transformer (HSFormer), which formulates dynamic modeling within the prediction horizon as a novel hybrid supervision task. To be precise, our method performs a rolling prediction process equipped with a hybrid supervision mechanism, which requires the model to predict the pose in the next frames based on the (typically error-contained) earlier predictions. In addition to the standard supervision loss, two self- and auxiliary-supervision mechanisms, which minimize the distance between predictions made from error-contained inputs and predictions made from error-free inputs (ground truth) and guide the model to make accurate predictions based on the ground truth, are introduced to improve the robustness of our model to input deviations at inference time and to stabilize the training process, respectively. Optimization techniques, such as stop-gradient, are extended to our model to improve training efficiency.
Future Motion Dynamic Modeling via Hybrid Supervision for Multi-Person Motion Prediction Uncertainty Reduction
[ "Yan Zhuang", "Yanlu Cai", "WEIZHONG ZHANG", "Cheng Jin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DjcQVopjzX
@inproceedings{ wang2024sfp, title={{SFP}: Spurious Feature-Targeted Pruning for Out-of-Distribution Generalization}, author={Yingchun Wang and Jingcai Guo and Song Guo and LIU Yi and Jie ZHANG and Weizhan Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DjcQVopjzX} }
Recent studies reveal that even highly biased dense networks can contain an invariant substructure with superior out-of-distribution (OOD) generalization. While existing works commonly seek these substructures using global sparsity constraints, the uniform imposition of sparse penalties across samples with diverse levels of spurious contents renders such methods suboptimal. The precise adaptation of model sparsity, specifically tailored for spurious features, remains a significant challenge. Motivated by the insight that in-distribution (ID) data containing spurious features may exhibit lower experiential risk, we propose a novel **S**purious **F**eature-targeted **P**runing framework, dubbed **SFP**, to induce the authentic invariant substructures without referring to the above concerns. Specifically, SFP distinguishes spurious features within ID instances during training by a theoretically validated threshold. It then penalizes the corresponding feature projections onto the model space, steering the optimization towards subspaces spanned by those invariant factors. Moreover, we also conduct detailed theoretical analysis to provide a rationality guarantee and a proof framework for OOD structures based on model sparsity. Experiments on various OOD datasets show that SFP can significantly outperform both structure-based and non-structure-based OOD generalization SOTAs by large margins.
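A rough sketch of the idea of thresholding per-sample risk and penalising the projection of suspected spurious features onto the model space is given below. The threshold `tau`, the penalty weight `lam`, and the plain squared-projection penalty are illustrative assumptions, not SFP's theoretically derived quantities.

```python
import torch
import torch.nn.functional as F

def sfp_style_loss(features, logits, labels, classifier_weight, tau=0.3, lam=1e-2):
    """Spurious-feature-targeted penalisation sketch.

    features: (B, D) penultimate features; logits: (B, C); labels: (B,);
    classifier_weight: (C, D) last-layer weights.
    Samples whose per-sample risk falls below tau are treated as likely carriers of
    spurious features, and the projection of their features onto the classifier
    weight space is penalised.
    """
    per_sample_risk = F.cross_entropy(logits, labels, reduction="none")   # (B,)
    spurious_mask = (per_sample_risk < tau).float()                       # 1 -> suspected spurious sample
    proj = features @ classifier_weight.t()                               # projection onto model space
    penalty = (spurious_mask.unsqueeze(1) * proj).pow(2).mean()
    return per_sample_risk.mean() + lam * penalty

# Toy usage.
feat = torch.randn(8, 64, requires_grad=True)
W = torch.randn(10, 64, requires_grad=True)
loss = sfp_style_loss(feat, feat @ W.t(), torch.randint(0, 10, (8,)), W)
loss.backward()
```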
SFP: Spurious Feature-Targeted Pruning for Out-of-Distribution Generalization
[ "Yingchun Wang", "Jingcai Guo", "Song Guo", "LIU Yi", "Jie ZHANG", "Weizhan Zhang" ]
Conference
poster
2305.11615
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Dj6dKnqPU8
@inproceedings{ li2024farfusion, title={{FARF}usion V2: A Geometry-based Radar-Camera Fusion Method on the Ground for Roadside Far-Range 3D Object Detection}, author={Yao Li and Jiajun Deng and Yuxuan Xiao and Yingjie Wang and Xiaomeng Chu and Jianmin Ji and Yanyong Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Dj6dKnqPU8} }
Fusing the data of millimeter-wave Radar sensors and high-definition cameras has emerged as a viable approach to achieving precise 3D object detection for roadside traffic surveillance. For roadside perception systems, earlier studies have pointed out that it is better to perform the fusion on the 2D image plane than on the BEV plane (which is popular for on-car perception systems), especially when the perception range is large (e.g., > 150 m). Image-plane fusion requires critical transformations, like perspective projection from the Radar’s BEV to the camera’s 2D plane and reverse IPM. However, real-world issues like uneven terrain and sensor movement degrade these transformations’ precision, impacting fusion effectiveness. To alleviate these issues, we propose a geometry-based Radar-camera fusion method on the ground, namely FARFusion V2. Specifically, we extend the ground-plane assumption in FARFusion [20] to support arbitrary shapes by formulating the ground height as an implicit representation based on geometric transformations. By incorporating the ground information, we can enhance Radar data with target height measurements. Consequently, we can project the enhanced Radar data onto the 2D plane to obtain more accurate depth information, thereby assisting the IPM process. A real-time transformation parameter estimation module is further introduced to refine the view transformation processes. Moreover, considering the various measurement noises across these two sensors, we introduce an uncertainty-based depth fusion strategy into the 2D fusion process to maximize the probability of obtaining the optimal depth value. Extensive experiments are conducted on our collected roadside OWL benchmark, demonstrating the excellent localization capacity of FARFusion V2 in far-range scenarios. Our method achieves an average location accuracy of 0.771 m when we extend the detection range up to 500 m.
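The generic principle behind "maximizing the probability of obtaining the optimal depth value" from two noisy sensors is inverse-variance (maximum-likelihood) fusion, sketched below. The paper's actual noise model and weighting may differ; the numbers are purely illustrative.

```python
import numpy as np

def fuse_depths(depth_radar, var_radar, depth_cam, var_cam):
    """Inverse-variance fusion of two independent Gaussian depth estimates."""
    w_r = 1.0 / var_radar
    w_c = 1.0 / var_cam
    fused = (w_r * depth_radar + w_c * depth_cam) / (w_r + w_c)
    fused_var = 1.0 / (w_r + w_c)   # the fused estimate is never less certain than either input
    return fused, fused_var

# Radar tends to be more reliable at long range, the camera at short range.
d, v = fuse_depths(np.array([180.0]), np.array([4.0]), np.array([170.0]), np.array([25.0]))
print(d, v)   # the fused depth lies closer to the lower-variance radar measurement
```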
FARFusion V2: A Geometry-based Radar-Camera Fusion Method on the Ground for Roadside Far-Range 3D Object Detection
[ "Yao Li", "Jiajun Deng", "Yuxuan Xiao", "Yingjie Wang", "Xiaomeng Chu", "Jianmin Ji", "Yanyong Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DgNJ18FQLe
@inproceedings{ wang2024view, title={View Gap Matters: Cross-view Topology and Information Decoupling for Multi-view Clustering}, author={Fangdi Wang and Siwei Wang and Jiaqi Jin and Zhibin Dong and Xihong Yang and Yu Feng and Xinzhong Zhu and Tianrui Liu and Xinwang Liu and En Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DgNJ18FQLe} }
Multi-view clustering, a pivotal technology in multimedia research, aims to leverage complementary information from diverse perspectives to enhance clustering performance. Current multi-view clustering methods normally enforce the reduction of distances between any pair of views, overlooking the heterogeneity between views and thereby sacrificing the diverse and valuable insights inherent in multi-view data. In this paper, we propose a Tree-Based View-Gap Maintaining Multi-View Clustering (TGM-MVC) method. Our approach introduces a novel conceptualization of multiple views as a graph structure. In this structure, each view corresponds to a node, with the view gap, calculated as the cosine distance between views, acting as the edge weight. Through graph pruning, we derive the minimum spanning tree of the views, reflecting the neighbouring relationships among them. Specifically, we apply a shared-specific learning framework and generate view trees for both view-shared and view-specific information. Concerning shared information, we only narrow the distance between adjacent views, while for specific information, we maintain the view gap between neighboring views. Theoretical analysis highlights the risks of eliminating the view gap, and comprehensive experiments validate the efficacy of our proposed TGM-MVC method.
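The view-tree construction (cosine-distance view gaps as edge weights, pruned to a minimum spanning tree) is straightforward to sketch. How each view is summarised into a single embedding vector is an assumption here; the paper defines its own representations.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cosine

def view_tree(view_embeddings):
    """Return MST edges (i, j, view_gap) over the views (sketch).

    view_embeddings: list of 1-D arrays, one global representation per view.
    """
    n = len(view_embeddings)
    gap = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            gap[i, j] = cosine(view_embeddings[i], view_embeddings[j])  # view gap as edge weight
    mst = minimum_spanning_tree(gap).toarray()   # prune to n-1 neighbouring relations
    return [(i, j, mst[i, j]) for i in range(n) for j in range(n) if mst[i, j] > 0]

views = [np.random.rand(32) for _ in range(5)]
print(view_tree(views))   # only these adjacent pairs are pulled together for shared information
```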
View Gap Matters: Cross-view Topology and Information Decoupling for Multi-view Clustering
[ "Fangdi Wang", "Siwei Wang", "Jiaqi Jin", "Zhibin Dong", "Xihong Yang", "Yu Feng", "Xinzhong Zhu", "Tianrui Liu", "Xinwang Liu", "En Zhu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DeN82P7RIl
@inproceedings{ zhu2024icmapper, title={{IC}-Mapper: Instance-Centric Spatio-Temporal Modeling for Online Vectorized Map Construction}, author={Jiangtong Zhu and YangZhao and Yinan Shi and Jianwu Fang and Jianru Xue}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DeN82P7RIl} }
Online vector map construction based on visual data can bypass the processes of data collection, post-processing, and manual annotation required by traditional map construction, which significantly enhances map-building efficiency. However, existing work treats the online mapping task as a local range perception task, overlooking the spatial scalability required for map construction. We propose \emph{IC-Mapper}, an instance-centric online mapping framework, which comprises two primary components: 1) \textbf{Instance-centric temporal association module:} For the detection queries of adjacent frames, we measure them in both feature and geometric dimensions to obtain the matching correspondence between instances across frames. 2) \textbf{Instance-centric spatial fusion module:} We perform point sampling on the historical global map from a spatial dimension and integrate it with the detection results of instances corresponding to the current frame to achieve real-time expansion and update of the map. Based on the nuScenes dataset, we evaluate our approach on detection, tracking, and global mapping metrics. Experimental results demonstrate the superiority of IC-Mapper against other state-of-the-art methods.
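The instance-centric temporal association (matching instances across adjacent frames in both feature and geometric dimensions) can be sketched with a fused cost matrix and Hungarian matching. The balance weight `alpha` and the gating threshold `max_cost` are illustrative knobs, not IC-Mapper's actual values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_instances(prev_feats, cur_feats, prev_centers, cur_centers,
                        alpha=0.5, max_cost=1.0):
    """Match map instances between adjacent frames (sketch)."""
    # Appearance cost in [0, 1] from cosine similarity of instance features.
    a = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    b = cur_feats / np.linalg.norm(cur_feats, axis=1, keepdims=True)
    feat_cost = 0.5 * (1.0 - a @ b.T)
    # Normalised geometric cost from instance-centre distances.
    geo = np.linalg.norm(prev_centers[:, None, :] - cur_centers[None, :, :], axis=-1)
    geo_cost = geo / (geo.max() + 1e-6)
    cost = alpha * feat_cost + (1 - alpha) * geo_cost
    rows, cols = linear_sum_assignment(cost)          # one-to-one matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]

matches = associate_instances(np.random.rand(4, 16), np.random.rand(5, 16),
                              np.random.rand(4, 2), np.random.rand(5, 2))
print(matches)
```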
IC-Mapper: Instance-Centric Spatio-Temporal Modeling for Online Vectorized Map Construction
[ "Jiangtong Zhu", "YangZhao", "Yinan Shi", "Jianwu Fang", "Jianru Xue" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DdSOZHNLmT
@inproceedings{ zhang2024rethinking, title={Rethinking the One-shot Object Detection: Cross-Domain Object Search}, author={Yupeng Zhang and Shuqi Zheng and Ruize Han and Yuzhong Feng and Junhui Hou and Linqi Song and Wei Feng and Liang Wan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DdSOZHNLmT} }
One-shot object detection (OSOD) uses a query patch to identify the same category of object in a target image. In the OSOD setting, the target images are required to contain the object category of the query patch, and the image styles (domains) of the query patch and target images are always similar. However, in practical applications, these requirements are often not satisfied. Therefore, we propose a new problem, namely Cross-Domain Object Search (CDOS), where the object categories of the query patch and target image are decoupled, and the image styles between them may also be significantly different. For this problem, we develop a new method, which incorporates both foreground-background contrastive learning heads and a domain-generalized feature augmentation technique. This makes our method effectively handle the object category gap and domain distribution gap between the query patch and target image in the training and testing datasets. We further build a new benchmark for the proposed CDOS problem, on which our method shows significant performance improvements over the comparison methods.
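One plausible form of a foreground-background contrastive head is an InfoNCE loss in which foreground proposals for the query act as positives and background proposals as negatives, as sketched below. The InfoNCE formulation and the temperature are assumptions; the paper's heads may be defined differently.

```python
import torch
import torch.nn.functional as F

def fg_bg_contrastive(query_emb, fg_embs, bg_embs, temperature=0.07):
    """Foreground-background contrastive loss sketch for a query-patch detector.

    query_emb: (D,) query-patch embedding; fg_embs: (Nf, D) proposals covering the
    searched object; bg_embs: (Nb, D) background proposals.
    """
    q = F.normalize(query_emb, dim=0)
    fg = F.normalize(fg_embs, dim=1)
    bg = F.normalize(bg_embs, dim=1)
    pos = fg @ q / temperature            # (Nf,)
    neg = bg @ q / temperature            # (Nb,)
    # Each foreground proposal competes against all background proposals.
    logits = torch.cat([pos.unsqueeze(1), neg.unsqueeze(0).expand(pos.size(0), -1)], dim=1)
    labels = torch.zeros(pos.size(0), dtype=torch.long)   # index 0 is the positive
    return F.cross_entropy(logits, labels)

loss = fg_bg_contrastive(torch.randn(128), torch.randn(3, 128), torch.randn(20, 128))
print(loss.item())
```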
Rethinking the One-shot Object Detection: Cross-Domain Object Search
[ "Yupeng Zhang", "Shuqi Zheng", "Ruize Han", "Yuzhong Feng", "Junhui Hou", "Linqi Song", "Wei Feng", "Liang Wan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DdPe5ZLO11
@inproceedings{ wei2024qsnns, title={Q-{SNN}s: Quantized Spiking Neural Networks}, author={Wenjie Wei and Yu Liang and Ammar Belatreche and Yichen Xiao and Honglin Cao and Zhenbang Ren and Guoqing Wang and Malu Zhang and Yang Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DdPe5ZLO11} }
Brain-inspired Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an asynchronous event-driven manner, offering an energy-efficient paradigm for the next generation of machine intelligence. However, the current focus within the SNN community prioritizes accuracy optimization through the development of large-scale models, limiting their viability in resource-constrained and low-power edge devices. To address this challenge, we introduce a lightweight and hardware-friendly Quantized SNN (Q-SNN) that applies quantization to both synaptic weights and membrane potentials. By significantly compressing these two key elements, the proposed Q-SNNs substantially reduce both memory usage and computational complexity. Moreover, to prevent the performance degradation caused by this compression, we present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory. Experimental evaluations on various datasets, including static and neuromorphic, demonstrate that our Q-SNNs outperform existing methods in terms of both model size and accuracy. These state-of-the-art results in efficiency and efficacy suggest that the proposed method can significantly improve edge intelligent computing.
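A minimal sketch of quantizing both synaptic weights and membrane potentials in a LIF layer is given below, using a straight-through estimator. Bit-widths, scales, and the leak factor are illustrative, and the WS-DR regulation term (and any surrogate gradient for the spike) is omitted.

```python
import torch
import torch.nn as nn

def quantize_ste(x: torch.Tensor, bits: int, scale: float):
    """Uniform symmetric quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()   # forward: quantized values, backward: identity

class QuantLIFLayer(nn.Module):
    """LIF layer with quantized weights and membrane potential (sketch)."""
    def __init__(self, d_in, d_out, w_bits=2, u_bits=4, v_th=1.0, leak=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.w_bits, self.u_bits, self.v_th, self.leak = w_bits, u_bits, v_th, leak

    def forward(self, spikes, u):   # spikes: (B, d_in) binary, u: (B, d_out) membrane potential
        w_q = quantize_ste(self.weight, self.w_bits, scale=0.1)
        u = self.leak * u + spikes @ w_q.t()
        u = quantize_ste(u, self.u_bits, scale=self.v_th / (2 ** (self.u_bits - 1)))
        out = (u >= self.v_th).float()   # fire
        u = u - out * self.v_th          # soft reset
        return out, u

layer = QuantLIFLayer(64, 32)
s, u = layer(torch.rand(8, 64).bernoulli(), torch.zeros(8, 32))
print(s.mean().item())
```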
Q-SNNs: Quantized Spiking Neural Networks
[ "Wenjie Wei", "Yu Liang", "Ammar Belatreche", "Yichen Xiao", "Honglin Cao", "Zhenbang Ren", "Guoqing Wang", "Malu Zhang", "Yang Yang" ]
Conference
poster
2406.13672
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Da9j0EjdJe
@inproceedings{ huang2024voicetuner, title={VoiceTuner: Self-Supervised Pre-training and Efficient Fine-tuning For Voice Generation}, author={Rongjie Huang and Yongqi Wang and Ruofan Hu and Xiaoshan Xu and Zhiqing Hong and Dongchao Yang and Xize Cheng and Zehan Wang and Ziyue Jiang and Zhenhui Ye and Luping Liu and Siqi Zheng and Zhou Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=Da9j0EjdJe} }
Voice large language models (LLMs) cast voice synthesis as a language modeling task in a discrete space, and have demonstrated significant progress to date. Despite the recent success, the current development of voice LLMs in low-resource applications is hampered by data scarcity and high computational cost. In this work, we propose VoiceTuner, with a self-supervised pre-training and efficient fine-tuning approach for low-resource voice generation. Specifically, 1) to mitigate data scarcity, we leverage large-scale unlabeled dataset and pre-train VoiceTuner-SSL without pre-defined applications, which can be fine-tuned in downstream tasks; 2) to further reduce the high training cost in complete fine-tuning, we introduce a multiscale transformer adapter to effectively update only around 1% parameters as a plug-and-play module. Experimental results demonstrate that VoiceTuner-SSL presents strong acoustic continuations, and VoiceTuner achieves state-of-the-art results in rich-resource TTS evaluation compared with competitive baseline models. Low-resource (1h, 10h, 30h) downstream applications including zero-shot TTS, instruction TTS, and singing voice synthesis present VoiceTuner's superior audio quality and style similarity with reduced data requirement and computational cost. Audio samples are available at https://VoiceTuner.github.io
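The "update only around 1% of parameters" idea can be illustrated with a generic residual bottleneck adapter attached to a frozen backbone; VoiceTuner's multiscale transformer adapter is more elaborate, so the module below is only a stand-in that shows the parameter budget.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic residual bottleneck adapter (zero-initialised, so it starts as identity)."""
    def __init__(self, d_model: int, r: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, d_model // r)
        self.up = nn.Linear(d_model // r, d_model)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Freeze a pretrained-style backbone and train only the adapters.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=6)
for p in backbone.parameters():
    p.requires_grad = False
adapters = nn.ModuleList([BottleneckAdapter(512) for _ in range(6)])

trainable = sum(p.numel() for p in adapters.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.2%}")   # roughly on the order of 1%
```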
VoiceTuner: Self-Supervised Pre-training and Efficient Fine-tuning For Voice Generation
[ "Rongjie Huang", "Yongqi Wang", "Ruofan Hu", "Xiaoshan Xu", "Zhiqing Hong", "Dongchao Yang", "Xize Cheng", "Zehan Wang", "Ziyue Jiang", "Zhenhui Ye", "Luping Liu", "Siqi Zheng", "Zhou Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DVm3Bk2eHh
@inproceedings{ zhang2024diffglue, title={DiffGlue: Diffusion-Aided Image Feature Matching}, author={Shihua Zhang and Jiayi Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DVm3Bk2eHh} }
As one of the most fundamental computer vision problems, image feature matching aims to establish correct correspondences between two-view images. Existing studies enhance the descriptions of feature points with a graph neural network (GNN), identifying correspondences with the predicted assignment matrix. However, this pipeline easily falls into a suboptimal result during training because the solution space is extremely complex, and it has no access to a prior that can guide information propagation and network convergence. In this paper, we propose a novel method called DiffGlue that introduces the Diffusion Model into the sparse image feature matching framework. Concretely, based on the incrementally iterative diffusion and denoising processes, DiffGlue can be guided by the prior from the Diffusion Model and trained step by step on the optimization path, approaching the optimal solution progressively. Besides, it contains a special Assignment-Guided Attention as a bridge to merge the Diffusion Model and sparse image feature matching, which injects the inherent prior into the GNN, thereby improving message delivery. Extensive experiments reveal that DiffGlue converges faster and better, outperforming state-of-the-art methods on several applications such as homography estimation, relative pose estimation, and visual localization. The code is available at https://github.com/SuhZhang/DiffGlue.
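One simple way to let an assignment prior steer message passing is to add the (noisy) assignment matrix as a bias on cross-attention logits, as sketched below. This is only a guess at the flavour of Assignment-Guided Attention; the operator defined in the paper may differ substantially.

```python
import torch
import torch.nn as nn

class AssignmentGuidedAttention(nn.Module):
    """Cross-attention biased by a soft assignment matrix (sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, desc_a, desc_b, assignment):
        # desc_a: (N, D), desc_b: (M, D), assignment: (N, M) soft matching scores.
        logits = (self.q(desc_a) @ self.k(desc_b).t()) * self.scale + assignment
        attn = logits.softmax(dim=-1)          # matched pairs exchange more information
        return desc_a + attn @ self.v(desc_b)  # residual update of image-A descriptors

attn = AssignmentGuidedAttention(128)
out = attn(torch.randn(100, 128), torch.randn(120, 128), torch.rand(100, 120))
print(out.shape)
```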
DiffGlue: Diffusion-Aided Image Feature Matching
[ "Shihua Zhang", "Jiayi Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DN3722rnLd
@inproceedings{ li2024cad, title={{CAD} Translator: An Effective Drive for Text to 3D Parametric Computer-Aided Design Generative Modeling}, author={Xueyang Li and Yu Song and Yunzhong Lou and Xiangdong Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DN3722rnLd} }
Computer-Aided Design (CAD) generative modeling is widely applicable in the fields of industrial engineering. Recently, text-to-3D generation has shown rapid progress in point clouds, mesh, and other non-parametric representations. In contrast, text-to-3D parametric CAD generative modeling, in which a shape is defined by several editable parametric command sequences, is a practical task that has not been well explored. To investigate this, we design an encoder-decoder framework, namely CAD Translator, for appropriately incorporating awareness of parametric CAD sequences into texts with only one-stage training. We first align texts and parametric CAD sequences via a Cascading Contrastive Strategy in the latent space, and then propose CT-Mix, which conducts random masking on their embeddings separately and obtains a fusion embedding via linear interpolation. This effectively strengthens the connection between texts and parametric CAD sequences. To train CAD Translator, we create a Text2CAD dataset with the help of a Large Multimodal Model (LMM) for this practical task and conduct thorough experiments to demonstrate the effectiveness of our method.
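The CT-Mix operation (independent random masking of the two modalities' embeddings, followed by linear interpolation into a fusion embedding) can be sketched as follows. The masking ratio, the fixed interpolation coefficient, and the assumption that both embeddings share the same sequence length are illustrative choices.

```python
import torch

def ct_mix(text_emb, cad_emb, mask_ratio=0.3, alpha=0.5):
    """CT-Mix sketch: mask each modality separately, then interpolate into a fusion embedding."""
    def random_mask(x, ratio):
        keep = (torch.rand_like(x[..., :1]) > ratio).float()   # drop whole token positions
        return x * keep

    t = random_mask(text_emb, mask_ratio)   # (B, L, D) masked text embedding
    c = random_mask(cad_emb, mask_ratio)    # (B, L, D) masked CAD-sequence embedding
    return alpha * t + (1.0 - alpha) * c    # fusion embedding used as an extra training signal

fused = ct_mix(torch.randn(4, 60, 256), torch.randn(4, 60, 256))
print(fused.shape)
```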
CAD Translator: An Effective Drive for Text to 3D Parametric Computer-Aided Design Generative Modeling
[ "Xueyang Li", "Yu Song", "Yunzhong Lou", "Xiangdong Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DM5eaZ6Ect
@inproceedings{ xu2024point, title={Point Cloud Reconstruction Is Insufficient to Learn 3D Representations}, author={Weichen Xu and Jian Cao and Tianhao Fu and Ruilong Ren and Zicong Hu and Xixin Cao and Xing Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DM5eaZ6Ect} }
This paper revisits the development of generative self-supervised learning in 2D images and 3D point clouds in autonomous driving. In 2D images, the pretext task has evolved from low-level to high-level features. Inspired by this, through model analysis we find that the gap in weight distribution between self-supervised learning and supervised learning is substantial when employing only low-level features as the pretext task in 3D point clouds. Low-level features represented by PoInt Cloud reconsTruction are insUfficient to learn 3D REpresentations (dubbed PICTURE). To advance the development of pretext tasks, we propose a unified generative self-supervised framework. Firstly, high-level features represented by the Seal features are demonstrated to exhibit semantic consistency with downstream tasks. We utilize the Seal voxel features as an additional pretext task to enhance the understanding of semantic information during pre-training. Next, we propose inter-class and intra-class discrimination-guided masking (I$^2$Mask) based on the attributes of the Seal voxel features, adaptively setting the masking ratio for each superclass. On the Waymo and nuScenes datasets, we achieve 75.13\% mAP and 72.69\% mAPH for 3D object detection, 79.4\% mIoU for 3D semantic segmentation, and 18.4\% mIoU for occupancy prediction. Extensive experiments have demonstrated the effectiveness and necessity of high-level features. The project page is available at https://anonymous-picture.github.io/.
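A rough illustration of adaptively setting a masking ratio per superclass from inter-/intra-class statistics is sketched below. The particular discrimination score (inter-class centre distance over intra-class spread) and the ratio range `r_min`/`r_max` are assumptions, not the definition used by I$^2$Mask.

```python
import torch

def superclass_mask_ratios(features, superclass_ids, r_min=0.5, r_max=0.9):
    """Map each superclass to a masking ratio: better-separated classes get masked more."""
    classes = superclass_ids.unique()
    centers = torch.stack([features[superclass_ids == c].mean(0) for c in classes])
    intra = torch.stack([(features[superclass_ids == c] - centers[i]).norm(dim=1).mean()
                         for i, c in enumerate(classes)])
    inter = torch.cdist(centers, centers).sum(1) / (len(classes) - 1)
    score = inter / (intra + 1e-6)                                  # higher -> easier superclass
    score = (score - score.min()) / (score.max() - score.min() + 1e-6)
    return {int(c): float(r_min + score[i] * (r_max - r_min)) for i, c in enumerate(classes)}

feats = torch.randn(1000, 32)     # stand-in voxel features
ids = torch.randint(0, 4, (1000,))
print(superclass_mask_ratios(feats, ids))
```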
Point Cloud Reconstruction Is Insufficient to Learn 3D Representations
[ "Weichen Xu", "Jian Cao", "Tianhao Fu", "Ruilong Ren", "Zicong Hu", "Xixin Cao", "Xing Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=DLzBBhZ93l
@inproceedings{ wang2024devil, title={Devil is in Details: Locality-Aware 3D Abdominal {CT} Volume Generation for Organ Segmentation}, author={Yuran Wang and Zhijing Wan and Yansheng Qiu and Zheng Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=DLzBBhZ93l} }
In the realm of medical image analysis, self-supervised learning (SSL) techniques have emerged to alleviate labeling demands, while still facing the challenge of training data scarcity owing to escalating resource requirements and privacy constraints. Numerous efforts employ generative models to generate high-fidelity, unlabeled 3D volumes across diverse modalities and anatomical regions. However, the intricate and indistinguishable anatomical structures within the abdomen pose a unique challenge to abdominal CT volume generation compared to other anatomical regions. To address this overlooked challenge, we introduce Locality-Aware Diffusion (Lad), a novel method tailored for exquisite 3D abdominal CT volume generation. We design a locality loss to refine crucial anatomical regions and devise a condition extractor to integrate abdominal priors into generation, thereby enabling the generation of large quantities of high-quality abdominal CT volumes essential for SSL tasks without the need for additional data such as labels or radiology reports. Volumes generated through our method demonstrate remarkable fidelity in reproducing abdominal structures, achieving a decrease in FID score from 0.0034 to 0.0002 on the AbdomenCT-1K dataset, closely mirroring authentic data and surpassing current methods. Extensive experiments demonstrate the effectiveness of our method in self-supervised organ segmentation tasks, yielding improved mean Dice scores on two abdominal datasets. These results underscore the potential of synthetic data to advance self-supervised learning in medical image analysis.
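A minimal sketch of a locality-aware objective (up-weighting voxels inside anatomically crucial regions) is shown below. The plain weighted-MSE form and the weight `lam` are assumptions; Lad defines its locality loss within the diffusion training objective rather than directly on volumes.

```python
import torch

def locality_loss(pred_volume, target_volume, organ_mask, lam=4.0):
    """Weighted reconstruction loss: crucial regions count (1 + lam) times more."""
    weight = 1.0 + lam * organ_mask   # organ_mask in {0, 1}, same shape as the volume
    return (weight * (pred_volume - target_volume) ** 2).mean()

pred = torch.randn(1, 1, 32, 64, 64)
target = torch.randn(1, 1, 32, 64, 64)
mask = (torch.rand(1, 1, 32, 64, 64) > 0.9).float()   # stand-in organ mask
print(locality_loss(pred, target, mask).item())
```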
Devil is in Details: Locality-Aware 3D Abdominal CT Volume Generation for Organ Segmentation
[ "Yuran Wang", "Zhijing Wan", "Yansheng Qiu", "Zheng Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=D969j33VyX
@inproceedings{ yu2024semgir, title={Sem{GIR}: Semantic-Guided Image Regeneration based method for {AI}-generated Image Detection and Attribution}, author={Xiao Yu and Kejiang Chen and Kai Zeng and Han Fang and Zijin Yang and Xiuwei Shang and Yuang Qi and Weiming Zhang and Nenghai Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=D969j33VyX} }
The rapid development of image generative models has lowered the threshold for image creation but also raised security concerns related to the propagation of false information, urgently necessitating the development of detection technologies for AI-generated images. Presently, text-to-image generation stands as the predominant approach to image generation, where the rendering of generated images hinges on two primary factors: text prompts and the inherent characteristics of the model. However, the variety of semantic text prompts yields diverse generated images, posing significant challenges to existing detection methodologies that rely solely on learning from image features, particularly in scenarios with limited samples. To tackle these challenges, this paper presents a novel perspective on the AI-generated image detection task, advocating for detection under semantic-decoupling conditions. Building upon this insight, we propose SemGIR, a semantic-guided image regeneration based method for AI-generated image detection. SemGIR first regenerates images through image-to-text followed by a text-to-image generation process, subsequently utilizing these re-generated image pairs to derive discriminative features. This regeneration process effectively decouples semantic features organically, allowing the detection process to concentrate more on the inherent characteristics of the generative model. Such an efficient detection scheme can also be effectively applied to attribution. Experimental findings demonstrate that in realistic scenarios with limited samples, SemGIR achieves an average detection accuracy 15.76\% higher than state-of-the-art (SOTA) methods. Furthermore, in attribution experiments on the SDv2.1 model, SemGIR attains an accuracy exceeding 98\%, affirming the effectiveness and practical utility of the proposed method.
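The regeneration pipeline can be sketched with user-supplied components; the `captioner`, `generator`, and `encoder` below are placeholders for whatever off-the-shelf models are used, and taking the feature residual between the original and the regenerated image is only one plausible way to realise the "semantic-decoupling" idea.

```python
import torch

def semantic_decoupled_feature(image, captioner, generator, encoder):
    """SemGIR-style sketch: image -> text -> image, then compare features of the pair."""
    prompt = captioner(image)                 # image-to-text
    regenerated = generator(prompt)           # text-to-image with the same semantics
    # Shared semantics roughly cancels; what remains reflects the generator's fingerprint.
    return encoder(image) - encoder(regenerated)

# Toy stand-ins just to show the data flow.
captioner = lambda img: "a photo"
generator = lambda txt: torch.randn(3, 224, 224)
encoder = lambda img: img.mean(dim=(1, 2))    # (3,) crude "feature"
feat = semantic_decoupled_feature(torch.randn(3, 224, 224), captioner, generator, encoder)
print(feat.shape)
```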
SemGIR: Semantic-Guided Image Regeneration based method for AI-generated Image Detection and Attribution
[ "Xiao Yu", "Kejiang Chen", "Kai Zeng", "Han Fang", "Zijin Yang", "Xiuwei Shang", "Yuang Qi", "Weiming Zhang", "Nenghai Yu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=D88kAvlmQJ
@inproceedings{ xiao2024bridging, title={Bridging Fourier and Spatial-Spectral Domains for Hyperspectral Image Denoising}, author={Jiahua Xiao and Yang Liu and Shizhou Zhang and Xing Wei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=D88kAvlmQJ} }
Remarkable progress has been made in hyperspectral image (HSI) denoising. However, the majority of existing methods are predominantly confined to the spatial-spectral domain, overlooking the untapped potential inherent in the Fourier domain. This paper presents a novel approach to HSI denoising that bridges information from the Fourier and spatial-spectral domains. Our method highlights key insights into the Fourier properties within spatial and spectral domains through the Fourier transform. Specifically, we note that the amplitude predominantly encodes noise and photon reflection characteristics, while the phase holds structural information. Additionally, the Fourier transform offers a receptive field that spans the entire image, enabling effective capture of the global noise distribution. These insights unveil new perspectives on the physical properties of HSIs, motivating us to leverage complementary information exchange between the Fourier and spatial-spectral domains. To this end, we introduce the Fourier-prior Integration Denoising Network (FIDNet), a potent yet straightforward approach that utilizes Fourier insights to synergistically interact with spatial-spectral domains for superior HSI denoising. In FIDNet, we independently extract spatial and Fourier features through dual branches and merge these representations to enhance spectral evolution modeling through the inherent structure consistency constraints and continuing reflection variation revealed in the Fourier prior. Our proposed method demonstrates robust generalization across synthetic and real-world benchmark datasets, outperforming state-of-the-art methods in both quantitative quality and visual results.
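The amplitude/phase decomposition that a Fourier branch would operate on is standard and can be sketched directly with torch.fft; how FIDNet then processes the two components is not reproduced here.

```python
import torch

def fourier_split(hsi):
    """Split an HSI (B, C, H, W) into per-band Fourier amplitude and phase."""
    spec = torch.fft.fft2(hsi, norm="ortho")       # full-image receptive field
    return torch.abs(spec), torch.angle(spec)      # amplitude ~ noise/reflection, phase ~ structure

def fourier_merge(amplitude, phase):
    """Inverse transform back to the image domain after the branches modify amp/phase."""
    spec = torch.polar(amplitude, phase)           # amplitude * exp(i * phase)
    return torch.fft.ifft2(spec, norm="ortho").real

x = torch.randn(2, 31, 64, 64)
amp, pha = fourier_split(x)
recon = fourier_merge(amp, pha)
print(torch.allclose(recon, x, atol=1e-5))         # lossless round trip
```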
Bridging Fourier and Spatial-Spectral Domains for Hyperspectral Image Denoising
[ "Jiahua Xiao", "Yang Liu", "Shizhou Zhang", "Xing Wei" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=D4d0sy7TFJ
@inproceedings{ xiang2024semanticaware, title={Semantic-Aware and Quality-Aware Interaction Network for Blind Video Quality Assessment}, author={Jianjun Xiang and Yuanjie Dang and Peng Chen and Ronghua Liang and Ruohong Huan and Nan Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=D4d0sy7TFJ} }
Current state-of-the-art video quality assessment (VQA) models typically integrate various perceptual features to comprehensively represent video quality degradation. These models either directly concatenate features or fuse different perceptual scores while ignoring the domain gaps between cross-aware features, thus failing to adequately learn the correlations and interactions between different perceptual features. To this end, we analyze the independent effects and information gaps of quality- and semantic-aware features on video quality. Based on an analysis of the spatial and temporal differences between the two aware features, we propose a semantic-**A**ware and quality-**A**ware **I**nteraction **Net**work (**A$^2$INet**) for blind VQA (BVQA). For spatial gaps, we introduce a cross-aware guided interaction module to enhance the interaction between semantic- and quality-aware features in a local-to-global manner. Considering temporal discrepancies, we design a cross-aware temporal modeling module to further perceive temporal content variation and quality saliency information, and perceptual features are regressed into a quality score by a temporal network and temporal pooling. Extensive experiments on six benchmark VQA datasets show that our model achieves state-of-the-art performance, and ablation studies further validate the effectiveness of each module. We also present a simple video sampling strategy to balance the effectiveness and efficiency of the model. The code for the proposed method will be released.
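One direction of such cross-aware interaction can be sketched as semantic tokens guiding quality tokens through cross-attention; A$^2$INet's local-to-global design and temporal modules are not reproduced, and the dimensions below are arbitrary.

```python
import torch
import torch.nn as nn

class CrossAwareInteraction(nn.Module):
    """Semantic-aware features guide quality-aware features via cross-attention (sketch)."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, quality_feats, semantic_feats):
        # quality_feats, semantic_feats: (B, N, D) token sequences from the two branches.
        guided, _ = self.attn(query=quality_feats, key=semantic_feats, value=semantic_feats)
        return self.norm(quality_feats + guided)

block = CrossAwareInteraction()
out = block(torch.randn(2, 49, 256), torch.randn(2, 49, 256))
print(out.shape)
```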
Semantic-Aware and Quality-Aware Interaction Network for Blind Video Quality Assessment
[ "Jianjun Xiang", "Yuanjie Dang", "Peng Chen", "Ronghua Liang", "Ruohong Huan", "Nan Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=D4cTfcaHOc
@inproceedings{ zhang2024partlevel, title={Part-level Reconstruction for Self-Supervised Category-level 6D Object Pose Estimation with Coarse-to-Fine Correspondence Optimization}, author={Zerui Zhang and Jun Yu and Liangxian Cui and Qiang Ling and TianyuLiu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=D4cTfcaHOc} }
Self-supervised category-level 6D pose estimation stands as a fundamental task in computer vision. Nonetheless, existing methods encounter the following challenges: 1) They are impacted by the many-to-one ambiguity in the correspondences between pixels and point clouds. 2) Existing networks struggle to reconstruct precise object models due to the significant part-level shape variations among specific categories. To address these issues, we propose a novel method based on a Coarse-to-Fine Correspondence Optimization (\textbf{CFCO}) module and a Part-level Shape Reconstruction (\textbf{PSR}) module. In the \textbf{CFCO} module, we employ Hungarian matching to generate one-to-one pseudo labels at both region and pixel levels, providing explicit supervision for the corresponding similarity matrices. In the \textbf{PSR} module, we introduce a part-level discrete shape memory to capture more fine-grained shape variations of different objects and utilize it to perform precise reconstruction. We evaluate our method on the REAL275 and WILD6D datasets. Extensive experiments demonstrate that our method outperforms existing methods, achieving new state-of-the-art results.
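Turning a soft, possibly many-to-one similarity matrix into one-to-one pseudo labels via Hungarian matching can be sketched as below; the BCE supervision on the matrix is one simple choice and not necessarily the loss used by the CFCO module.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def one_to_one_pseudo_labels(similarity):
    """Generate one-to-one pseudo labels for a similarity matrix and supervise it (sketch).

    similarity: (N, M) pixel/region-to-point similarity logits.
    """
    cost = -similarity.detach().cpu().numpy()        # Hungarian matching maximises similarity
    rows, cols = linear_sum_assignment(cost)
    target = torch.zeros_like(similarity)
    target[torch.as_tensor(rows), torch.as_tensor(cols)] = 1.0   # one-to-one assignment
    loss = F.binary_cross_entropy_with_logits(similarity, target)
    return target, loss

sim = torch.randn(16, 16, requires_grad=True)
pseudo, loss = one_to_one_pseudo_labels(sim)
loss.backward()
print(pseudo.sum().item())   # exactly 16 matches, removing many-to-one ambiguity
```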
Part-level Reconstruction for Self-Supervised Category-level 6D Object Pose Estimation with Coarse-to-Fine Correspondence Optimization
[ "Zerui Zhang", "Jun Yu", "Liangxian Cui", "Qiang Ling", "TianyuLiu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=D3eRo2L1CV
@inproceedings{ jia2024mos, title={MoS\${\textasciicircum}2\$: Mixture of Scale and Shift Experts for Text-Only Video Captioning}, author={Heng Jia and Yunqiu Xu and Linchao Zhu and Guang Chen and Yufei Wang and Yi Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=D3eRo2L1CV} }
Video captioning is a challenging task and typically requires video-text paired data for training. However, manually annotating coherent textual descriptions for videos is laborious and time-consuming. To address this problem, we propose to utilize solely text data to enhance video captioning models. Drawing inspiration from the exceptional text generation capabilities demonstrated by large language models (LLMs), we aim to leverage these models to generate high-quality and high-diversity video captions for the target domain. Specifically, we prompt GPT-4 with few-shot target-domain captions to generate a limited set of plausible video captions. Subsequently, we continue to prompt GPT-4 with the generated captions to acquire large-scale captions. To fully exploit the generated captions, we propose a Mixture of Scale and Shift experts (MoS$^2$) for efficient adaptation of pre-trained image captioning models for video captioning. MoS$^2$ estimates a probability distribution over a collection of experts by a lightweight routing network, determining the allocation of tokens to appropriate experts. This dynamic adjustment mechanism allows for specific responses to input features, thereby enhancing the model's ability to handle data variations. Our approach not only customizes model responses to input variations, effectively addressing the distribution shift between synthetic and actual captions but also significantly reduces the number of learnable parameters, allowing for more efficient adaptations. With only text data, we achieve superior performance and significantly narrow the performance gap between zero-shot and fine-tuned models. Our method boosts video captioning performance with the synthetic text data, thus substantially alleviating the dependence on paired and large-scale real data of the target domain. The code will be publicly available.
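A mixture of scale-and-shift experts with a lightweight router can be sketched as below; the expert count, router design, and per-sample (rather than per-token) routing are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class MoS2Layer(nn.Module):
    """Mixture of scale-and-shift experts with a lightweight routing network (sketch)."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.scales = nn.Parameter(torch.ones(num_experts, dim))   # each expert: per-channel scale
        self.shifts = nn.Parameter(torch.zeros(num_experts, dim))  # and per-channel shift
        self.router = nn.Linear(dim, num_experts)                  # lightweight routing network

    def forward(self, x):                                          # x: (B, N, D) token features
        weights = self.router(x.mean(dim=1)).softmax(dim=-1)       # (B, E) distribution over experts
        scale = weights @ self.scales                              # (B, D) expected scale
        shift = weights @ self.shifts                              # (B, D) expected shift
        return x * scale.unsqueeze(1) + shift.unsqueeze(1)

layer = MoS2Layer(dim=768)
print(layer(torch.randn(2, 50, 768)).shape)
extra = sum(p.numel() for p in layer.parameters())
print(f"extra parameters per adapted layer: {extra}")   # tiny compared to a frozen captioner
```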
MoS^2: Mixture of Scale and Shift Experts for Text-Only Video Captioning
[ "Heng Jia", "Yunqiu Xu", "Linchao Zhu", "Guang Chen", "Yufei Wang", "Yi Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=CyMJ9capAZ
@inproceedings{ mi2024clifvqa, title={{CL}iF-{VQA}: Enhancing Video Quality Assessment by Incorporating High-Level Semantic Information related to Human Feelings}, author={Yachun Mi and Yan Shu and Yu Li and Chen Hui and Puchao Zhou and Shaohui Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=CyMJ9capAZ} }
Video Quality Assessment (VQA) aims to simulate the process of perceiving video quality by the Human Visual System (HVS). Although subjective studies have shown that the judgments of the HVS are strongly influenced by human feelings, it remains unclear how video content relates to human feelings. The recent rapid development of Vision-Language pre-trained models (VLMs) has established a solid link between language and vision. Since human feelings can be accurately described by language, a VLM can extract information related to human feelings from visual content with linguistic prompts. In this paper, we propose CLiF-VQA, which innovatively utilizes the visual-linguistic capabilities of VLMs to introduce human-feelings features on top of traditional spatio-temporal features, more accurately simulating the perceptual process of the HVS. In order to efficiently extract features related to human feelings from videos, we pioneer the exploration of the consistency between Contrastive Language-Image Pre-training (CLIP) and human feelings in video perception. In addition, we design effective prompts, i.e., a variety of objective and subjective descriptions closely related to human feelings. Extensive experiments show that the proposed CLiF-VQA exhibits excellent performance on several VQA datasets. The results show that introducing human-feelings features on top of spatio-temporal features is an effective way to obtain better performance.
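Scoring sampled frames against feeling-related prompts with an off-the-shelf CLIP can be sketched as follows; the prompt wordings are illustrative stand-ins, not the paper's designed prompt set, and real decoded frames would replace the random images.

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative feeling-related prompts; the paper designs its own objective/subjective set.
prompts = ["a pleasant, comfortable video frame",
           "an annoying, low quality video frame",
           "a boring video frame",
           "an exciting video frame"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in for sampled video frames (replace with real decoded frames).
frames = [Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))
          for _ in range(4)]

with torch.no_grad():
    inputs = processor(text=prompts, images=frames, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)   # (frames, prompts)

# Average over frames -> a per-video "feeling" feature to combine with spatio-temporal features.
feeling_feature = probs.mean(dim=0)
print(feeling_feature)
```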
CLiF-VQA: Enhancing Video Quality Assessment by Incorporating High-Level Semantic Information related to Human Feelings
[ "Yachun Mi", "Yan Shu", "Yu Li", "Chen Hui", "Puchao Zhou", "Shaohui Liu" ]
Conference
poster
2311.07090
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=CxQXKqiAw8
@inproceedings{ jin2024headsetoff, title={HeadSetOff: Enabling Photorealistic Video Conferencing on Economical {VR} Headsets}, author={Yili Jin and Duan Xize and Fangxin Wang and Xue Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=CxQXKqiAw8} }
Virtual Reality (VR) headsets have become increasingly popular for remote collaboration, but video conferencing poses challenges when the user's face is covered by the headset. Existing solutions have limitations in terms of accessibility. In this paper, we propose HeadsetOff, a novel system that achieves photorealistic video conferencing on economical VR headsets by leveraging voice-driven face reconstruction. HeadsetOff consists of three main components: a multimodal attention-based predictor, a generator, and an adaptive controller. The predictor effectively predicts the user's future behavior based on different modalities. The generator employs voice input, head motion, and eye blinks to animate the human face. The adaptive controller dynamically selects the appropriate generator model based on the trade-off between video quality and delay, aiming to maximize Quality of Experience while minimizing latency. Experimental results demonstrate the effectiveness of HeadsetOff in achieving high-quality, low-latency video conferencing on economical VR headsets.
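The adaptive controller's quality/delay trade-off can be sketched as picking the highest-quality generator variant that fits a latency budget given the measured network delay. The candidate table, field names, and budget below are hypothetical; the paper's controller may use a learned or finer-grained policy.

```python
def select_generator(models, network_delay_ms, latency_budget_ms=100.0):
    """Pick the highest-quality generator whose end-to-end delay fits the budget (sketch).

    models: list of dicts with hypothetical 'name', 'quality' (e.g. PSNR) and
    'latency_ms' (inference time) fields measured offline.
    """
    feasible = [m for m in models if m["latency_ms"] + network_delay_ms <= latency_budget_ms]
    if not feasible:                                   # degrade gracefully to the fastest model
        return min(models, key=lambda m: m["latency_ms"])
    return max(feasible, key=lambda m: m["quality"])

candidates = [
    {"name": "small",  "quality": 30.1, "latency_ms": 12},
    {"name": "medium", "quality": 32.4, "latency_ms": 45},
    {"name": "large",  "quality": 33.0, "latency_ms": 140},
]
print(select_generator(candidates, network_delay_ms=20)["name"])   # -> "medium"
print(select_generator(candidates, network_delay_ms=70)["name"])   # falls back to "small"
```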
HeadSetOff: Enabling Photorealistic Video Conferencing on Economical VR Headsets
[ "Yili Jin", "Duan Xize", "Fangxin Wang", "Xue Liu" ]
Conference
oral
2407.19988
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0