bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=TqRqTw3itr | @inproceedings{
wang2024siaovd,
title={{SIA}-{OVD}: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection},
author={Zishuo Wang and Wenhao Zhou and Jinglin Xu and Yuxin Peng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TqRqTw3itr}
} | Open-vocabulary detection (OVD) aims to detect novel objects without instance-level annotations to achieve open-world object detection at a lower cost. Existing OVD methods mainly rely on the powerful open-vocabulary image-text alignment capability of Vision-Language Pretrained Models (VLM) such as CLIP. However, CLIP is trained on image-text pairs and lacks the perceptual ability for local regions within an image, resulting in the gap between image and region representations. Directly using CLIP for OVD causes inaccurate region classification. We find the image-region gap is primarily caused by the deformation of region feature maps during region of interest (RoI) extraction. To mitigate the inaccurate region classification in OVD, we propose a new Shape-Invariant Adapter named SIA-OVD to bridge the image-region gap in the OVD task. SIA-OVD learns a set of feature adapters for regions with different shapes and designs a new adapter allocation mechanism to select the optimal adapter for each region. The adapted region representations can align better with text representations learned by CLIP. Extensive experiments demonstrate that SIA-OVD effectively improves the classification accuracy for regions by addressing the gap between images and regions caused by shape deformation. SIA-OVD achieves substantial improvements over representative methods on the COCO-OVD benchmark. The code is available at https://github.com/PKU-ICST-MIPL/SIA-OVD_ACMMM2024. | SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection | [
"Zishuo Wang",
"Wenhao Zhou",
"Jinglin Xu",
"Yuxin Peng"
] | Conference | poster | 2410.05650 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TndpJ9Xi8k | @inproceedings{
zhang2024an,
title={An In-depth Study of Bandwidth Allocation across Media Sources in Video Conferencing},
author={Zejun Zhang and Xiao Zhu and Anlan Zhang and Feng Qian},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TndpJ9Xi8k}
} | Video Conferencing Applications (VCAs) are indispensable for real-time communication in remote work and education by enabling simultaneous transmission of audio, video, and screen-sharing content. Despite their ubiquity, there is a noticeable lack of research on how these platforms allocate resources, especially under limited bandwidth constraints, and how these resource allocation strategies affect the Quality of Experience (QoE). This paper addresses this research gap by conducting an in-depth analysis of bandwidth allocation strategies among prominent VCAs, including Zoom, Webex, and Google Meet, with an emphasis on their implications for QoE. To assess QoE effectively, we propose a general QoE model based on data collected from a user study involving over 800 participants. This study marks a pioneering effort in the extensive evaluation of multimedia transmissions across diverse scenarios for VCAs, representing a significant advancement over prior research that predominantly concentrated on the quality assessment of singular media types. The promising outcomes highlight the model's effectiveness and generalization in accurately predicting Quality of Experience (QoE) across various scenarios among VCAs. | An In-depth Study of Bandwidth Allocation across Media Sources in Video Conferencing | [
"Zejun Zhang",
"Xiao Zhu",
"Anlan Zhang",
"Feng Qian"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Tl13I7b3Ao | @inproceedings{
han2024mambad,
title={Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model},
author={Xu Han and Yuan Tang and Zhaoxuan Wang and Xianzhi Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Tl13I7b3Ao}
} | Existing Transformer-based models for point cloud analysis suffer from quadratic complexity, leading to compromised point cloud resolution and information loss. In contrast, the newly proposed Mamba model, based on state space models (SSM), outperforms Transformer in multiple areas with only linear complexity. However, the straightforward adoption of Mamba does not achieve satisfactory performance on point cloud tasks. In this work, we present Mamba3D, a state space model tailored for point cloud learning to enhance local feature extraction, achieving superior performance, high efficiency, and scalability potential. Specifically, we propose a simple yet effective Local Norm Pooling (LNP) block to extract local geometric features. Additionally, to obtain better global features, we introduce a bidirectional SSM (bi-SSM) with both a token forward SSM and a novel backward SSM that operates on the feature channel. Extensive experimental results show that Mamba3D surpasses Transformer-based counterparts and concurrent works in multiple tasks, with or without pre-training. Notably, Mamba3D achieves multiple SoTA, including an overall accuracy of 92.6% (train from scratch) on the ScanObjectNN and 95.1% (with single-modal pre-training) on the ModelNet40 classification task, with only linear complexity. We shall release the code and model upon publication of this work. | Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model | [
"Xu Han",
"Yuan Tang",
"Zhaoxuan Wang",
"Xianzhi Li"
] | Conference | poster | 2404.14966 | [
"https://github.com/xhanxu/Mamba3D"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TjFn6xktTm | @inproceedings{
ren2024crossclass,
title={Cross-Class Domain Adaptive Semantic Segmentation with Visual Language Models},
author={Wenqi Ren and Ruihao Xia and Meng Zheng and Ziyan Wu and Yang Tang and Nicu Sebe},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TjFn6xktTm}
} | This paper addresses the issue of cross-class domain adaptation (CCDA) in semantic segmentation, where the target domain contains both shared and novel classes that are either unlabeled or unseen in the source domain. This problem is challenging, as the absence of labels for novel classes hampers the accurate segmentation of both shared and novel classes. Since Visual Language Models (VLMs) are capable of generating zero-shot predictions without requiring task-specific training examples, we propose a label alignment method by leveraging VLMs to relabel pseudo labels for novel classes. Considering that VLMs typically provide only image-level predictions, we embed a two-stage method to enable fine-grained semantic segmentation and design a threshold based on the uncertainty of pseudo labels to exclude noisy VLM predictions. To further augment the supervision of novel classes, we devise memory banks with an adaptive update scheme to effectively manage accurate VLM predictions, which are then resampled to increase the sampling probability of novel classes. Through comprehensive experiments, we demonstrate the effectiveness and versatility of our proposed method across various CCDA scenarios. | Cross-Class Domain Adaptive Semantic Segmentation with Visual Language Models | [
"Wenqi Ren",
"Ruihao Xia",
"Meng Zheng",
"Ziyan Wu",
"Yang Tang",
"Nicu Sebe"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Tc51JqJzq6 | @inproceedings{
yin2024cso,
title={{CSO}: Constraint-guided Space Optimization for Active Scene Mapping},
author={Xuefeng Yin and Chenyang Zhu and Shanglai Qu and Yuqi Li and Kai Xu and Baocai Yin and Xin Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Tc51JqJzq6}
} | Simultaneously mapping and exploring a complex unknown scene is an NP-hard problem that remains challenging despite the rapid development of deep learning techniques. We present CSO, a deep reinforcement learning-based framework for efficient active scene mapping. Constraint-guided space optimization is adopted for both the state and critic spaces to reduce the difficulty of finding the globally optimal exploration path and to avoid long-distance round trips while exploring. We first take the frontier-based entropy as the input constraint, together with the raw observation, into the network, which guides the training to start from imitating local greedy searching. However, the entropy-based optimization can easily get stuck in a few local optima or cause inefficient round trips since the entropy space and the real world do not share the same metric. Inspired by constrained reinforcement learning, we then introduce an action mask-based optimization constraint to align the metrics of these two spaces. Exploration optimization in the aligned spaces can avoid long-distance round trips more effectively. We evaluate our method with a ground robot in 29 complex indoor scenes of different scales. Our method achieves 19.16\% higher exploration efficiency and 3.12\% higher exploration completeness on average compared to the state-of-the-art alternatives. We also deploy our method in real-world scenes, where it efficiently explores an area of 649 $m^2$. The experiment video can be found in the supplementary material. | CSO: Constraint-guided Space Optimization for Active Scene Mapping | [
"Xuefeng Yin",
"Chenyang Zhu",
"Shanglai Qu",
"Yuqi Li",
"Kai Xu",
"Baocai Yin",
"Xin Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TYbuVVM3iR | @inproceedings{
luo2024codeswap,
title={CodeSwap: Symmetrically Face Swapping Based on Prior Codebook},
author={Xiangyang Luo and Xin Zhang and Yifan Xie and Xinyi Tong and Weijiang Yu and Heng Chang and Fei Ma and Fei Richard Yu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TYbuVVM3iR}
} | Face swapping, the technique of transferring the identity from one face to another, has emerged as a field with significant practical applications. However, previous swapping methods often result in visible artifacts. To address this issue, in our paper, we propose $CodeSwap$, a symmetrical framework to achieve face swapping with high fidelity and realism. Specifically, our method first utilizes a codebook that captures the knowledge of high-quality facial features. Building on this foundation, face swapping is then converted into a code manipulation task in a code space. To achieve this, we design a Transformer-based architecture to update each code independently, which enables more precise manipulations. Furthermore, we incorporate a mask generator to achieve seamless blending of the generated face with the background of the target image.
A distinctive characteristic of our method is its symmetrical approach to processing both target and source images, simultaneously extracting information from each to improve the quality of face swapping.
This symmetry also simplifies the bidirectional exchange of faces in a singular operation.
Through extensive experiments on CelebA-HQ and FF++, our method is proven to not only achieve efficient identity transfer but also substantially reduce visible artifacts. | CodeSwap: Symmetrically Face Swapping Based on Prior Codebook | [
"Xiangyang Luo",
"Xin Zhang",
"Yifan Xie",
"Xinyi Tong",
"Weijiang Yu",
"Heng Chang",
"Fei Ma",
"Fei Richard Yu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TWzfIkho4e | @inproceedings{
nie2024frade,
title={{FRADE}: Forgery-aware Audio-distilled Multimodal Learning for Deepfake Detection},
author={Fan Nie and Jiangqun Ni and Jian Zhang and Bin Zhang and Weizhe Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TWzfIkho4e}
} | Nowadays, the abuse of AI-generated content (AIGC), especially the facial images known as deepfake, on social networks has raised severe security concerns, which might involve the manipulations of both visual and audio signals. For multimodal deepfake detection, previous methods usually exploit forgery-relevant knowledge to fully finetune Vision transformers (ViTs) and perform cross-modal interaction to expose the audio-visual inconsistencies. However, these approaches may undermine the prior knowledge of pretrained ViTs and ignore the domain gap between different modalities, resulting in unsatisfactory performance. To tackle these challenges, in this paper, we propose a new framework, i.e., Forgery-aware Audio-distilled Multimodal Learning (FRADE), for deepfake detection. In FRADE, the parameters of pretrained ViT are frozen to preserve its prior knowledge, while two well-devised learnable components, i.e., the Adaptive Forgery-aware Injection (AFI) and Audio-distilled Cross-modal Interaction (ACI), are leveraged to adapt forgery relevant knowledge. Specifically, AFI captures high-frequency discriminative features on both audio and visual signals and injects them into ViT via the self-attention layer. Meanwhile, ACI employs a set of latent tokens to distill audio information, which could bridge the domain gap between audio and visual modalities. The ACI is then used to well learn the inherent audio-visual relationships by cross-modal interaction. Extensive experiments demonstrate that the proposed framework could outperform other state-of-the-art multimodal deepfake detection methods under various circumstances. | FRADE: Forgery-aware Audio-distilled Multimodal Learning for Deepfake Detection | [
"Fan Nie",
"Jiangqun Ni",
"Jian Zhang",
"Bin Zhang",
"Weizhe Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TSuwWHORY2 | @inproceedings{
sun2024autoacd,
title={Auto-{ACD}: A Large-scale Dataset for Audio-Language Representation Learning},
author={Luoyi Sun and Xuenan Xu and Mengyue Wu and Weidi Xie},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TSuwWHORY2}
} | Recently, the AI community has made significant strides in developing powerful foundation models, driven by large-scale multimodal datasets. However, for audio representation learning, the present datasets suffer from limitations in the following aspects: insufficient volume, simplistic content, and arduous collection procedures. To establish an audio dataset with high-quality captions, we propose an innovative, automatic approach leveraging multimodal inputs, such as video frames and audio streams. Specifically, we construct a large-scale, high-quality audio-language dataset, named Auto-ACD, comprising over 1.5M audio-text pairs. We exploit a series of pre-trained models or APIs to determine audio-visual synchronisation and to generate image captions, object detections, or audio tags for specific videos. Subsequently, we employ an LLM to paraphrase a congruent caption for each audio clip, guided by the extracted multi-modality clues. To demonstrate the effectiveness of the proposed dataset, we train widely used models on our dataset and show performance improvements on various downstream tasks, namely audio-language retrieval, audio captioning, and zero-shot classification. In addition, we establish a novel benchmark with environmental information and provide a benchmark for audio-text tasks. | Auto-ACD: A Large-scale Dataset for Audio-Language Representation Learning | [
"Luoyi Sun",
"Xuenan Xu",
"Mengyue Wu",
"Weidi Xie"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TOMVFf5L6Q | @inproceedings{
liu2024dualmodeling,
title={Dual-Modeling Decouple Distillation for Unsupervised Anomaly Detection},
author={Xinyue Liu and Jianyuan Wang and Biao Leng and Shuo Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TOMVFf5L6Q}
} | Knowledge distillation based on a student-teacher network is one of the mainstream solution paradigms for the challenging unsupervised anomaly detection task, utilizing the difference in representation capabilities of the teacher and student networks to implement anomaly localization. However, over-generalization of the student network to the teacher network may lead to negligible differences in the representation capabilities for anomalies, thus affecting the detection effectiveness. Existing methods address the possible over-generalization by using differentiated students and teachers from the structural perspective or by explicitly expanding the distilled information from the content perspective, which inevitably results in an increased likelihood of underfitting of the student network and poor anomaly detection capabilities at the anomaly center or edge. In this paper, we propose Dual-Modeling Decouple Distillation (DMDD) for unsupervised anomaly detection. In DMDD, a Decouple Student-Teacher Network is proposed to decouple the initial student features into normality and abnormality features. We further introduce Dual-Modeling Distillation based on normal-anomaly image pairs, fitting the normality features of an anomalous image to the teacher features of the corresponding normal image, and widening the distance between the abnormality features and the teacher features in anomalous regions. Synthesizing these two distillation ideas, we achieve anomaly detection that focuses on both the edge and the center of anomalies. Finally, a Multi-perception Segmentation Network is proposed to achieve focused anomaly map fusion based on multiple attention. Experimental results on MVTec AD show that DMDD surpasses the SOTA localization performance of previous knowledge distillation-based methods, reaching 98.85% on pixel-level AUC and 96.13% on PRO. | Dual-Modeling Decouple Distillation for Unsupervised Anomaly Detection | [
"Xinyue Liu",
"Jianyuan Wang",
"Biao Leng",
"Shuo Zhang"
] | Conference | poster | 2408.03888 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TMeLmiQOTk | @inproceedings{
zhong2024urbancross,
title={UrbanCross: Enhancing Satellite Image-Text Retrieval with Cross-Domain Adaptation},
author={Siru Zhong and Xixuan Hao and Yibo Yan and Ying Zhang and Yangqiu Song and Yuxuan Liang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TMeLmiQOTk}
} | Urbanization challenges underscore the necessity for effective satellite image-text retrieval methods to swiftly access specific information enriched with geographic semantics for urban applications. However, existing methods often overlook significant domain gaps across diverse urban landscapes, primarily focusing on enhancing retrieval performance within single domains. To tackle this issue, we present UrbanCross, a new framework for cross-domain satellite image-text retrieval. UrbanCross leverages a high-quality, cross-domain dataset enriched with extensive geo-tags from three countries to highlight domain diversity. It employs the Large Multimodal Model (LMM) for textual refinement and the Segment Anything Model (SAM) for visual augmentation, achieving a fine-grained alignment of images, segments and texts, yielding a 10\% improvement in retrieval performance. Additionally, UrbanCross incorporates an adaptive curriculum-based source sampler and a weighted adversarial cross-domain fine-tuning module, progressively enhancing adaptability across various domains. Extensive experiments confirm UrbanCross's superior efficiency in retrieval and adaptation to new urban environments, demonstrating an average performance increase of 15\% over its version without domain adaptation mechanisms, effectively bridging the domain gap. Our code is publicly accessible, and the dataset will be made available at https://anonymous.4open.science/r/UrbanCross/. | UrbanCross: Enhancing Satellite Image-Text Retrieval with Cross-Domain Adaptation | [
"Siru Zhong",
"Xixuan Hao",
"Yibo Yan",
"Ying Zhang",
"Yangqiu Song",
"Yuxuan Liang"
] | Conference | poster | 2404.14241 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TKRqWQVawP | @inproceedings{
ma2024automatic,
title={Automatic and Aligned Anchor Learning Strategy for Multi-View Clustering},
author={Huimin Ma and Siwei Wang and Shengju Yu and Suyuan Liu and Jun-Jie Huang and Huijun Wu and Xinwang Liu and En Zhu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TKRqWQVawP}
} | Multi-view Clustering (MVC) generally utilizes the anchor technique to decrease the computational complexity so as to tackle large-scale scenarios. Existing approaches generally select anchors in advance to complete the subsequent clustering task. Nevertheless, the number of anchors cannot be predetermined and must be selected as a parameter, which introduces additional time consumption for parameter search. Moreover, maintaining an identical number of anchors across each view is not reasonable, as it restricts the representational capacity of anchors in individual views. To address the above issues, we propose a view-adaptive anchor multi-view clustering method called Multi-view Clustering with Automatic and Aligned Anchor (3AMVC). We introduce the Hierarchical Bipartite Neighbor Clustering (HBNC) strategy to adaptively select a suitable number of representative anchors from the original samples of each view. Specifically, when the representative difference of anchors lies in an acceptable and satisfactory range, the HBNC process is halted and picks out the final anchors. In addition, in response to the varying quantities of anchors across different views, we propose an innovative anchor alignment strategy. This approach initially evaluates the quality of anchors on each view based on the intra-cluster distance criterion and then proceeds to align based on the view with the highest-quality anchors. The carefully organized experiments validate the effectiveness and strengths of 3AMVC. | Automatic and Aligned Anchor Learning Strategy for Multi-View Clustering | [
"Huimin Ma",
"Siwei Wang",
"Shengju Yu",
"Suyuan Liu",
"Jun-Jie Huang",
"Huijun Wu",
"Xinwang Liu",
"En Zhu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TInX4ZyNZB | @inproceedings{
niu2024minet,
title={MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues},
author={Yuzhen Niu and Lifen Yang and Rui Xu and Yuezhou Li and Yuzhong Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TInX4ZyNZB}
} | Existing weakly-supervised camouflaged object detection (WSCOD) methods have much difficulty in detecting accurate object boundaries due to insufficient and imprecise boundary supervision in scribble annotations. Drawing inspiration from human perception that discerns camouflaged objects by incorporating both object region and boundary information, we propose a novel Mutual Interaction Network (MiNet) for scribble-based WSCOD to alleviate the detection difficulty caused by insufficient scribbles. The proposed MiNet facilitates mutual reinforcement between region and edge cues, thereby integrating more robust priors to enhance detection accuracy. In this paper, we first construct an edge cue refinement net, featuring a core region-aware guidance module (RGM) aimed at leveraging the extracted region feature as a prior to generate the discriminative edge map. By considering both object semantic and positional relationships between edge feature and region feature, RGM highlights the areas associated with the object in the edge feature. Subsequently, to tackle the inherent similarity between camouflaged objects and the surroundings, we devise a region-boundary refinement net. This net incorporates a core edge-aware guidance module (EGM), which uses the enhanced edge map from the edge cue refinement net as guidance to refine the object boundaries in an iterative and multi-level manner. Experiments on CAMO, CHAMELEON, COD10K, and NC4K datasets demonstrate that the proposed MiNet outperforms the state-of-the-art methods. | MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues | [
"Yuzhen Niu",
"Lifen Yang",
"Rui Xu",
"Yuezhou Li",
"Yuzhong Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TFueeF3aTo | @inproceedings{
zhang2024pixelfade,
title={PixelFade: Privacy-preserving Person Re-identification with Noise-guided Progressive Replacement},
author={Delong Zhang and Yi-Xing Peng and Xiao-Ming Wu and Ancong Wu and Wei-Shi Zheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TFueeF3aTo}
} | Online person re-identification services face privacy breaches from potential data leaks and recovery attacks, exposing cloud-stored images to malicious attackers and triggering public concern. The privacy protection of pedestrian images is crucial. Previous privacy-preserving person re-identification methods are unable to resist recovery attacks and compromise accuracy. In this paper, we propose an iterative method (PixelFade) to optimize pedestrian images into noise-like images to resist recovery attacks. We first give an in-depth study of protected images from previous privacy methods, which reveals that the \textbf{chaos} of protected images can disrupt the learning of recovery networks, leading to a decrease in the power of the recovery attacks. Accordingly, we propose a Noise-guided Objective Function with the feature constraints of a specific authorization model, optimizing pedestrian images into normally-distributed noise images while preserving their original identity information as per the authorization model. To solve the above non-convex optimization problem, we propose a heuristic optimization algorithm that alternately performs the Constraint Operation and the Partial Replacement operation. This strategy not only ensures that original pixels are replaced with noise to protect privacy, but also guides the images towards an improved optimization direction to effectively preserve discriminative features. Extensive experiments demonstrate that our PixelFade outperforms previous methods in both resisting recovery attacks and Re-ID performance. The code will be released. | PixelFade: Privacy-preserving Person Re-identification with Noise-guided Progressive Replacement | [
"Delong Zhang",
"Yi-Xing Peng",
"Xiao-Ming Wu",
"Ancong Wu",
"Wei-Shi Zheng"
] | Conference | poster | 2408.05543 | [
"https://github.com/isee-laboratory/pixelfade"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TFFnsgu2Pr | @inproceedings{
huang2024roco,
title={RoCo: Robust Cooperative Perception By Iterative Object Matching and Pose Adjustment},
author={Zhe Huang and Shuo Wang and Yongcai Wang and Wanting Li and Deying Li and Lei Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TFFnsgu2Pr}
} | Collaborative autonomous driving with multiple vehicles usually requires the data fusion from multiple modalities. To ensure effective fusion, the data from each individual modality shall maintain a reasonably high quality. However, in collaborative perception, the quality of object detection based on a modality is highly sensitive to the relative pose errors among the agents. It leads to feature misalignment and significantly reduces collaborative performance.
To address this issue, we propose RoCo, a novel unsupervised framework to conduct iterative object matching and agent pose adjustment. To the best of our knowledge, our work is the first to model the pose correction problem in collaborative perception as an object matching task, which reliably associates common objects detected by different agents. On top of this, we propose a graph optimization process to adjust the agent poses by minimizing the alignment errors of the associated objects, and the object matching is redone based on the adjusted agent poses. This process is iteratively repeated until convergence. Experimental studies on both simulated and real-world datasets demonstrate that the proposed framework RoCo consistently outperforms existing relevant methods in terms of collaborative object detection performance, and exhibits highly desired robustness when the pose information of agents contains high-level noise. Ablation studies are also provided to show the impact of its key parameters and components. The code will be released. | RoCo: Robust Cooperative Perception By Iterative Object Matching and Pose Adjustment | [
"Zhe Huang",
"Shuo Wang",
"Yongcai Wang",
"Wanting Li",
"Deying Li",
"Lei Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TAVtkpjS9P | @inproceedings{
sun2024tdsd,
title={{TDSD}: Text-Driven Scene-Decoupled Weakly Supervised Video Anomaly Detection},
author={Shengyang Sun and Jiashen Hua and Junyi Feng and Dongxu Wei and Baisheng Lai and Xiaojin Gong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TAVtkpjS9P}
} | Video anomaly detection has garnered widespread attention in industry and academia in recent years due to its significant role in public security. However, many existing methods overlook the influence of scenes on anomaly detection. These methods simply label the occurrence of certain actions or objects as anomalous. In reality, scene context plays a crucial role in determining anomalies. For example, running on a highway is anomalous, while running on a playground is normal. Therefore, understanding the scene is essential for effective anomaly detection. In this work, we aim to address the challenge of scene-dependent weakly supervised video anomaly detection by decoupling scenes. Specifically, we propose a novel text-driven scene-decoupled (TDSD) framework, consisting of a TDSD module (TDSDM) and fine-grained visual augmentation (FVA) modules. The scene-decoupled module extracts semantic information from scenes, while the FVA module assists in fine-grained visual enhancement. We validate the effectiveness of our approach by constructing two scene-dependent datasets and achieve state-of-the-art results on scene-agnostic datasets as well. Code is available at https://github.com/shengyangsun/TDSD. | TDSD: Text-Driven Scene-Decoupled Weakly Supervised Video Anomaly Detection | [
"Shengyang Sun",
"Jiashen Hua",
"Junyi Feng",
"Dongxu Wei",
"Baisheng Lai",
"Xiaojin Gong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=T6fuEVqCLS | @inproceedings{
zheng2024semisupervised,
title={Semi-supervised Visible-Infrared Person Re-identification via Modality Unification and Confidence Guidance},
author={Xiying Zheng and Yukang Zhang and Yang Lu and Hanzi Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=T6fuEVqCLS}
} | Semi-supervised visible-infrared person re-identification (SSVI-ReID) aims to match pedestrian images of the same identity from different modalities (visible and infrared) while only annotating visible images, which is highly related to multimedia and multi-modal processing.
Existing works primarily focus on assigning accurate pseudo-labels to infrared images but overlook the two key challenges: erroneous pseudo-labels and large modality discrepancy. To alleviate these issues, this paper proposes a novel Modality-Unified and Confidence-Guided (MUCG) semi-supervised learning framework. Specifically, we first propose a Dynamic Intermediate Modality Generation (DIMG) module, which transfers knowledge from labeled visible images to unlabeled infrared images, enhancing the pseudo-label quality and bridging the modality discrepancy. Meanwhile, we propose a Weighted Identification Loss (WIL) that can reduce the model's dependence on erroneous labels by using confidence weighting. Moreover, an effective Modality Consistency Loss (MCL) is proposed to narrow the distribution of visible and infrared features, further narrowing the modality discrepancy and enabling the learning of modality-unified features. Extensive experiments show that the proposed MUCG has significant advantages in improving the performance of the SSVI-ReID task, surpassing the current state-of-the-art methods by a significant margin.
The code will be available. | Semi-supervised Visible-Infrared Person Re-identification via Modality Unification and Confidence Guidance | [
"Xiying Zheng",
"Yukang Zhang",
"Yang Lu",
"Hanzi Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Suj4R2pKcq | @inproceedings{
xin2024robustface,
title={RobustFace: Adaptive Mining of Noise and Hard Samples for Robust Face Recognitions},
author={Yang Xin and Yu Zhou and Jianmin Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Suj4R2pKcq}
} | While margin-based deep face recognition models, such as ArcFace and AdaFace, have achieved remarkable successes over recent years, they may suffer from degraded performance when encountering training sets corrupted with noise. This is often inevitable when massively large-scale datasets need to be dealt with, yet it remains difficult to construct clean enough face datasets under these circumstances. In this paper, we propose a robust deep face recognition model, RobustFace, by combining the advantages of margin-based learning models with the strength of mining-based approaches to effectively mitigate the impact of noise during training. Specifically, we introduce a noise-adaptive mining strategy to dynamically adjust the emphasis balance between hard and noise samples by monitoring the model's recognition performance at the batch level to provide optimization-oriented feedback, enabling direct training on noisy datasets without the requirement of pre-training. Extensive experiments validate that our proposed RobustFace achieves competitive performance in comparison with the existing SoTA models when trained with clean datasets. When trained with both real-world and synthetic noisy datasets, RobustFace significantly outperforms the existing models, especially when the synthetic noisy datasets are corrupted with both close-set and open-set noises. While the existing baseline models suffer from an average performance drop of around 40\% under these circumstances, our proposed model still delivers accuracy rates of more than 90\%. | RobustFace: Adaptive Mining of Noise and Hard Samples for Robust Face Recognitions | [
"Yang Xin",
"Yu Zhou",
"Jianmin Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SsVVrDheMH | @inproceedings{
lin2024icondm,
title={Icon{DM}: Text-Guided Icon Set Expansion Using Diffusion Models},
author={Jiawei Lin and Zhaoyun Jiang and Jiaqi Guo and Shizhao Sun and Ting Liu and Zijiang James Yang and Jian-Guang Lou and Dongmei Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SsVVrDheMH}
} | Icons are ubiquitous visual elements in graphic design. However, their creation is non-trivial and time-consuming. To this end, we draw inspiration from the booming text-to-image field and propose Text-Guided Icon Set Expansion, a task that allows users to create novel and style-preserving icons using textual descriptions and a few handmade icons as style reference. Despite its usefulness, this task poses two unique challenges. (i) Abstract Concept Visualization. Abstract concepts like technology and health are frequently encountered in icon creation, but their visualization requires a mental grounding process that connects them to physical and easy-to-draw concepts. (ii) Fine-grained Style Transfer. Unlike ordinary images, icons exhibit far richer fine-grained stylistic elements, including tones, line widths, shapes, shadow effects, etc, setting a higher demand on capturing and preserving them during generation.
To address these challenges, we propose IconDM, a method based on pre-trained text-to-image (T2I) diffusion models. It involves a one-shot domain adaptation process and an online style transfer process. The domain adaptation aims to improve the pre-trained T2I model in understanding abstract concepts by finetuning on high-quality icon-text pairs. To achieve this, we construct IconBank, a large-scale dataset of 2.3 million icon-text pairs, where the texts are generated by a state-of-the-art vision-language model from icons. In style transfer, we introduce a Style Enhancement Module into the T2I model. It explicitly extracts the fine-grained style features from the given reference icons, and is jointly optimized with the T2I model during DreamBooth tuning. To assess IconDM, we present IconBench, a structured suite with 30 icon sets and 100 concepts (including 50 abstract concepts) for generation. Quantitative results, qualitative analysis, and extensive ablation studies demonstrate the effectiveness of IconDM. | IconDM: Text-Guided Icon Set Expansion Using Diffusion Models | [
"Jiawei Lin",
"Zhaoyun Jiang",
"Jiaqi Guo",
"Shizhao Sun",
"Ting Liu",
"Zijiang James Yang",
"Jian-Guang Lou",
"Dongmei Zhang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Srt5NmAaW4 | @inproceedings{
he2024metadragonboat,
title={MetaDragonBoat: Exploring Paddling Techniques of Virtual Dragon Boating in a Metaverse Campus},
author={Wei He and Xiang Li and Shengtian Xu and Yuzheng Chen and SIO CHAN IN DEVIN and Ge lin and LIK-HANG LEE},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Srt5NmAaW4}
} | The preservation of cultural heritage, as mandated by the United Nations Sustainable Development Goals (SDGs), is integral to sustainable urban development. This paper focuses on the Dragon Boat Festival, a prominent event in Chinese cultural heritage, and proposes leveraging immersive technologies, particularly Virtual Reality (VR), to enhance its preservation and accessibility. Traditionally, participation in the festival's dragon boat races was limited to elite athletes, excluding broader demographics. Our proposed solution, named MetaDragonBoat, enables virtual participation in dragon boat racing, offering immersive experiences that replicate physical exertion through a cultural journey. Thus, we build a digital twin of a university campus located in a region with a rich dragon boat racing tradition. Coupled with three paddling techniques that are enabled by either commercial controllers or physical paddle controllers with haptic feedback, diversified users can engage in realistic rowing experiences. Our results demonstrate that by integrating resistance into the paddle controls, users could simulate the physical effort of dragon boat racing, promoting a deeper understanding and appreciation of this cultural heritage. | MetaDragonBoat: Exploring Paddling Techniques of Virtual Dragon Boating in a Metaverse Campus | [
"Wei He",
"Xiang Li",
"Shengtian Xu",
"Yuzheng Chen",
"SIO CHAN IN DEVIN",
"Ge lin",
"LIK-HANG LEE"
] | Conference | poster | 2408.04013 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=So38kD9wyR | @inproceedings{
ma2024bridging,
title={Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching},
author={Xiang Ma and Xuemei Li and Lexin Fang and Caiming Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=So38kD9wyR}
} | Many contrastive learning-based models have achieved advanced performance in image-text matching tasks. The key to these models lies in analyzing the correlation between image-text pairs, which involves cross-modal interaction of embeddings in corresponding dimensions. However, the embeddings of different modalities come from different models or modules, and there is a significant modality gap. Directly interacting such embeddings lacks rationality and may capture inaccurate correlations. Therefore, we propose a novel method called DIAS to bridge the modality gap from two aspects: (1) We align the information representation of embeddings from different modalities in corresponding dimensions to ensure the correlation calculation is based on interactions of similar information. (2) Spatial constraints on inter- and intra-modality unmatched pairs are introduced to ensure the effectiveness of the model's semantic alignment. Besides, a sparse correlation algorithm is proposed to select strongly correlated spatial relationships, enabling the model to learn more significant features and avoid being misled by weak correlations. Extensive experiments demonstrate the superiority of DIAS, achieving 4.3\%-10.2\% rSum improvements on the Flickr30k and MSCOCO benchmarks. | Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching | [
"Xiang Ma",
"Xuemei Li",
"Lexin Fang",
"Caiming Zhang"
] | Conference | poster | 2410.16853 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SlqoZ2gODs | @inproceedings{
lu2024voxeltrack,
title={VoxelTrack: Exploring Multi-level Voxel Representation for 3D Point Cloud Object Tracking},
author={Yuxuan Lu and Jiahao Nie and Zhiwei He and Hongjie Gu and Xudong Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SlqoZ2gODs}
} | Current LiDAR point cloud-based 3D single object tracking (SOT) methods typically rely on point-based representation networks. Despite demonstrated success, such networks suffer from some fundamental problems: 1) They contain pooling operations to cope with inherently disordered point clouds, hindering the capture of 3D spatial information that is useful for tracking, a regression task. 2) The adopted set abstraction operation hardly handles density-inconsistent point clouds, also preventing 3D spatial information from being modeled. To solve these problems, we introduce a novel tracking framework, termed VoxelTrack. By voxelizing inherently disordered point clouds into 3D voxels and extracting their features via sparse convolution blocks, VoxelTrack effectively models precise and robust 3D spatial information, thereby guiding accurate position prediction for tracked objects. Moreover, VoxelTrack incorporates a dual-stream encoder with a cross-iterative feature fusion module to further explore fine-grained 3D spatial information for tracking. Benefiting from accurate 3D spatial information being modeled, our VoxelTrack simplifies the tracking pipeline with a single regression loss. Extensive experiments are conducted on three widely adopted datasets including KITTI, NuScenes and Waymo Open Dataset. The experimental results confirm that VoxelTrack achieves state-of-the-art performance (88.3%, 71.4% and 63.6% mean precision on the three datasets, respectively), and outperforms the existing trackers with a real-time speed of 36 FPS on a single TITAN RTX GPU.
The source code and model will be released. | VoxelTrack: Exploring Multi-level Voxel Representation for 3D Point Cloud Object Tracking | [
"Yuxuan Lu",
"Jiahao Nie",
"Zhiwei He",
"Hongjie Gu",
"Xudong Lv"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SeSdURMtX8 | @inproceedings{
peng2024rewite,
title={ReWiTe: Realistic Wide-angle and Telephoto Dual Camera Fusion Dataset via Beam Splitter Camera Rig},
author={Chunli Peng and Xuan Dong and Tiantian Cao and Zhengqing Li and Kun Dong and Weixin Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SeSdURMtX8}
} | The fusion of images from dual camera systems featuring a wide-angle and a telephoto camera has become a hotspot problem recently. By integrating simultaneously captured wide-angle and telephoto images from these systems, the resulting fused image achieves a wide field of view (FOV) coupled with high-definition quality. Existing approaches are mostly deep learning methods, and predominantly rely on supervised learning, where the training dataset plays a pivotal role. However, current datasets typically adopt a data synthesis approach to generate input pairs of wide-angle and telephoto images alongside ground-truth images. Notably, the wide-angle inputs are synthesized rather than captured using real wide-angle cameras, and the ground-truth image is captured by a wide-angle camera, whose quality is substantially lower than that of the input telephoto images captured by telephoto cameras. To address these limitations, we introduce a novel hardware setup utilizing a beam splitter to simultaneously capture three images, i.e., input pairs and ground-truth images, from two authentic cellphones equipped with wide-angle and telephoto dual cameras. Specifically, the wide-angle and telephoto images captured by cellphone 2 serve as the input pair, while the telephoto image captured by cellphone 1, which is calibrated to match the optical path of the wide-angle image from cellphone 2, serves as the ground-truth image, maintaining quality on par with the input telephoto image. Experiments validate that our newly introduced dataset, named ReWiTe, significantly enhances the performance of various existing methods for real-world wide-angle and telephoto dual image fusion tasks. | ReWiTe: Realistic Wide-angle and Telephoto Dual Camera Fusion Dataset via Beam Splitter Camera Rig | [
"Chunli Peng",
"Xuan Dong",
"Tiantian Cao",
"Zhengqing Li",
"Kun Dong",
"Weixin Li"
] | Conference | poster | 2404.10584 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SYwhAoFoa6 | @inproceedings{
wang2024large,
title={Large Multi-modality Model Assisted {AI}-Generated Image Quality Assessment},
author={Puyi Wang and Wei Sun and Zicheng Zhang and Jun Jia and Yanwei Jiang and Zhichao Zhang and Xiongkuo Min and Guangtao Zhai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SYwhAoFoa6}
} | Traditional deep neural network (DNN)-based image quality assessment (IQA) models leverage convolutional neural networks (CNNs) or Transformers to learn the quality-aware feature representation, achieving commendable performance on natural scene images. However, when applied to AI-Generated images (AGIs), these DNN-based IQA models exhibit subpar performance. This situation is largely due to the semantic inaccuracies inherent in certain AGIs caused by the uncontrollable nature of the generation process. Thus, the capability to discern semantic content becomes crucial for assessing the quality of AGIs. Traditional DNN-based IQA models, constrained by limited parameter complexity and training data, struggle to capture complex fine-grained semantic features, making it challenging to grasp the existence and coherence of the semantic content of the entire image. To address the shortfall in semantic content perception of current IQA models, we introduce a large ***M***ulti-modality model ***A***ssisted ***A***I-***G***enerated ***I***mage ***Q***uality ***A***ssessment (***MA-AGIQA***) model, which utilizes semantically informed guidance to sense semantic information and extract semantic vectors through carefully designed text prompts. Moreover, it employs a mixture of experts (MoE) structure to dynamically integrate the semantic information with the quality-aware features extracted by traditional DNN-based IQA models. Comprehensive experiments conducted on two AI-generated content datasets, AIGCQA-20k and AGIQA-3k, show that MA-AGIQA achieves state-of-the-art performance and demonstrate its superior generalization capabilities in assessing the quality of AGIs. The code will be available. | Large Multi-modality Model Assisted AI-Generated Image Quality Assessment | [
"Puyi Wang",
"Wei Sun",
"Zicheng Zhang",
"Jun Jia",
"Yanwei Jiang",
"Zhichao Zhang",
"Xiongkuo Min",
"Guangtao Zhai"
] | Conference | oral | 2404.17762 | [
"https://github.com/wangpuyi/ma-agiqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SWuns4mMsy | @inproceedings{
fang2024mtsnet,
title={{MTSN}et: Joint Feature Adaptation and Enhancement for Text-Guided Multi-view Martian Terrain Segmentation},
author={Yang Fang and Xuefeng Rao and Xinbo Gao and Weisheng Li and Min Zijian},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SWuns4mMsy}
} | Martian terrain segmentation plays a crucial role in autonomous navigation and safe driving of Mars rovers as well as global analysis of Martian geological landforms. However, most deep learning-based segmentation models cannot effectively handle the challenges of highly unstructured and unbalanced terrain distribution on the Martian surface, thus leading to inadequate adaptability and generalization ability. In this paper, we propose a novel multi-view Martian Terrain Segmentation framework (MTSNet) by developing an efficient Martian Terrain text-Guided Segment Anything Model (MTG-SAM) and combining it with a tailored Local Terrain Feature Enhancement Network (LTEN) to capture intricate terrain details. Specifically, the proposed MTG-SAM is equipped with a Terrain Context attention Adapter Module (TCAM) to efficiently and effectively unleash the model's adaptability and transferability on the Mars-specific terrain distribution.
Then, a Local Terrain Feature Enhancement Network (LTEN) is designed to compensate for the limitations of MTG-SAM in capturing the fine-grained local terrain features of the Martian surface. Afterwards, a simple yet efficient Gated Fusion Module (GFM) is introduced to dynamically merge the global contextual features from the MTG-SAM encoder and the locally refined features from the LTEN module for comprehensive terrain feature learning. Moreover, the proposed MTSNet accepts terrain-specific text as prompts, resolving the efficiency issue of existing methods that require costly annotation of bounding boxes or foreground points. Experimental results on the AI4Mars and ConeQuest datasets demonstrate that our proposed MTSNet effectively learns the unique Martian terrain feature distribution and achieves state-of-the-art performance on multi-view terrain segmentation from the perspectives of both the Mars rover and satellite remote sensing. Code is available at https://github.com/raoxuefeng/mtsnet. | MTSNet: Joint Feature Adaptation and Enhancement for Text-Guided Multi-view Martian Terrain Segmentation | [
"Yang Fang",
"Xuefeng Rao",
"Xinbo Gao",
"Weisheng Li",
"Min Zijian"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SWFpL4G2pf | @inproceedings{
yang2024spatiotemporal,
title={Spatiotemporal Fine-grained Video Description for Short Videos},
author={Te Yang and Jian Jia and Bo Wang and Yanhua cheng and Yan Li and Dongze Hao and Xipeng Cao and Quan Chen and Han Li and Peng Jiang and Xiangyu Zhu and Zhen Lei},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SWFpL4G2pf}
} | In the mobile internet era, short videos are inundating people's lives. However, research on visual language models specifically designed for short videos has yet to be fully explored. Short videos are not just videos of limited duration: the prominent visual details and high information density of short videos differentiate them from long videos. In this paper, we propose SpatioTemporal Fine-grained Video Description (STFVD), emphasizing the uniqueness of short videos, which entails capturing the intricate details of the main subject and fine-grained movements. To this end, we create a comprehensive Short Video Advertisement Description (SVAD) dataset, comprising 34,930 clips from 5,046 videos. The dataset covers a range of topics, including 191 sub-industries, 649 popular products, and 470 trending games. Various efforts have been made in the data annotation process to ensure the inclusion of fine-grained spatiotemporal information, resulting in 34,930 high-quality annotations. Compared to existing datasets, samples in SVAD exhibit a superior text information density, suggesting that SVAD is more appropriate for the analysis of short videos. Based on the SVAD dataset, we develop SVAD-VLM to generate spatiotemporal fine-grained descriptions for short videos. We use a prompt-guided keyword generation task to efficiently learn key visual information. Moreover, we also utilize dual visual alignment to exploit the advantage of mixed-dataset training. Experiments on the SVAD dataset demonstrate the challenge of STFVD and the competitive performance of the proposed method compared to previous ones. | Spatiotemporal Fine-grained Video Description for Short Videos | [
"Te Yang",
"Jian Jia",
"Bo Wang",
"Yanhua cheng",
"Yan Li",
"Dongze Hao",
"Xipeng Cao",
"Quan Chen",
"Han Li",
"Peng Jiang",
"Xiangyu Zhu",
"Zhen Lei"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SUvowhclym | @inproceedings{
zheng2024a,
title={A Unified Understanding of Adversarial Vulnerability Regarding Unimodal Models and Vision-Language Pre-training Models},
author={Haonan Zheng and Xinyang Deng and Wen Jiang and Wenrui Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SUvowhclym}
} | With Vision-Language Pre-training (VLP) models demonstrating powerful multimodal interaction capabilities, the application scenarios of neural networks are no longer confined to unimodal domains such as CV and NLP, but have expanded to more complex multimodal V+L downstream tasks. The security vulnerabilities of unimodal models have been extensively examined, whereas those of VLP models remain a challenge. We note that in CV models, the understanding of images comes from annotated information, while VLP models are designed to learn image representations directly from raw text. Motivated by this discrepancy, we developed the Feature Guidance Attack (FGA), a novel method that uses text representations to direct the perturbation of clean images, resulting in the generation of adversarial images. FGA is orthogonal to many advanced attack strategies in the unimodal domain, facilitating the direct application of rich research findings from the unimodal to the multimodal scenario. By appropriately introducing a text attack into FGA, we construct Feature Guidance with Text Attack (FGA-T). Through the interaction of attacking two modalities, FGA-T achieves superior attack effects against VLP models. Moreover, incorporating data augmentation and momentum mechanisms significantly improves the black-box transferability of FGA-T. Our method demonstrates stable and effective attack capabilities across various datasets, downstream tasks, and both black-box and white-box settings, offering a unified baseline for exploring the robustness of VLP models. | A Unified Understanding of Adversarial Vulnerability Regarding Unimodal Models and Vision-Language Pre-training Models | [
"Haonan Zheng",
"Xinyang Deng",
"Wen Jiang",
"Wenrui Li"
] | Conference | oral | 2407.17797 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SSY5F8jBej | @inproceedings{
jiang2024hunting,
title={Hunting Blemishes: Language-guided High-fidelity Face Retouching Transformer with Limited Paired Data},
author={Le Jiang and Yan Huang and Lianxin Xie and Wen Xue and Cheng Liu and Si Wu and Hau-San Wong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SSY5F8jBej}
} | The prevalence of multimedia applications has led to increased concerns and demand for auto face retouching. Face retouching aims to enhance portrait quality by removing blemishes. However, the existing auto-retouching methods rely heavily on a large amount of paired training samples, and perform less satisfactorily when handling complex and unusual blemishes. To address this issue, we propose a Language-guided Blemish Removal Transformer for automatically retouching face images, while at the same time reducing the dependency of the model on paired training data. Our model is referred to as LangBRT, which leverages vision-language pre-training for precise facial blemish removal. Specifically, we design a text-prompted blemish detection module that indicates the regions to be edited. The priors not only enable the transformer network to handle specific blemishes in certain areas, but also reduce the reliance on retouching training data. Further, we adopt a target-aware cross attention mechanism, such that the blemish regions are edited accurately while at the same time maintaining the normal skin regions unchanged. Finally, we adopt a regularization approach to encourage the semantic consistency between the synthesized image and the text description of the desired retouching outcome. Extensive experiments are performed to demonstrate the superior performance of LangBRT over competing auto-retouching methods in terms of dependency on training data, blemish detection accuracy and synthesis quality. | Hunting Blemishes: Language-guided High-fidelity Face Retouching Transformer with Limited Paired Data | [
"Le Jiang",
"Yan Huang",
"Lianxin Xie",
"Wen Xue",
"Cheng Liu",
"Si Wu",
"Hau-San Wong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SRZFOWf3lN | @inproceedings{
liu2024two,
title={Two Teachers Are Better Than One: Semi-supervised Elliptical Object Detection by Dual-Teacher Collaborative Guidance},
author={Yu Liu and Longhan Feng and Qi Jia and Zezheng Liu and Zihuang Cao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SRZFOWf3lN}
} | Elliptical Object Detection (EOD) is crucial yet challenging due to complex scenes and varying object characteristics. Existing methods often struggle with parameter configurations and lack adaptability in label-scarce scenarios. To address this, a new semi-supervised teacher-student framework, Dual-Teacher Collaborative Guidance (DTCG), is proposed, comprising a five-parameter teacher detector, a six-parameter teacher detector, and a student detector. This allows the two teachers, specializing in different regression approaches, to co-instruct the student within a unified model, preventing errors and enhancing performance. Additionally, a feature correlation module (FCM) highlights differences between teacher features and employs deformable convolution to select advantageous features for final parameter regression. A collaborative training strategy (CoT) updates the teachers asynchronously, breaking through training and performance bottlenecks. Extensive experiments conducted on two widely recognized datasets affirm the superior performance of our DTCG over other leading competitors across various semi-supervised scenarios. Notably, our method achieves a 5.61% higher performance than the second best method when utilizing only 10% annotated data. | Two Teachers Are Better Than One: Semi-supervised Elliptical Object Detection by Dual-Teacher Collaborative Guidance | [
"Yu Liu",
"Longhan Feng",
"Qi Jia",
"Zezheng Liu",
"Zihuang Cao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SQreBrpK2c | @inproceedings{
wu2024rainmamba,
title={RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining},
author={Hongtao Wu and Yijun Yang and Huihui Xu and Weiming Wang and JINNI ZHOU and Lei Zhu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SQreBrpK2c}
} | Outdoor vision systems are frequently contaminated by rain streaks and raindrops, which significantly degrade the performance of visual tasks and multimedia applications. The nature of videos exhibits redundant temporal cues for rain removal with higher stability. Traditional video deraining methods heavily rely on optical flow estimation and kernel-based methods, which have a limited receptive field. Yet, transformer architectures, while enabling long-term dependencies, bring about a significant increase in computational complexity. Recently, the linear-complexity operator of the state space models (SSMs) has contrarily facilitated efficient long-term temporal modeling, which is crucial for rain streak and raindrop removal in videos. Unexpectedly, its uni-dimensional sequential process on videos destroys the local correlations across the spatio-temporal dimension by distancing adjacent pixels. To address this, we present an improved SSMs-based video deraining network (RainMamba) with a novel Hilbert scanning mechanism to better capture sequence-level local information. We also introduce a difference-guided dynamic contrastive locality learning strategy to enhance the patch-level self-similarity learning ability of the proposed network. Extensive experiments on four synthesized video deraining datasets and real-world rainy videos demonstrate the superiority of our network in the removal of rain streaks and raindrops. Our code and results are available at https://github.com/TonyHongtaoWu/RainMamba. | RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining | [
"Hongtao Wu",
"Yijun Yang",
"Huihui Xu",
"Weiming Wang",
"JINNI ZHOU",
"Lei Zhu"
] | Conference | oral | 2407.21773 | [
"https://github.com/TonyHongtaoWu/RainMamba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SNwk8uqmqs | @inproceedings{
guo2024prtgs,
title={{PRTGS}: Precomputed Radiance Transfer of Gaussian Splats for Real-Time High-Quality Relighting},
author={Yijia Guo and Yuanxi Bai and Liwen Hu and Guo Ziyi and Mianzhi Liu and Yu Cai and Tiejun Huang and Lei Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SNwk8uqmqs}
} | We proposed Precomputed Radiance Transfer of Gaussian Splats (PRTGS), a real-time high-quality relighting method for Gaussian splats in low-frequency lighting environments that captures soft shadows and interreflections by precomputing 3D Gaussian splats' radiance transfer.
Existing studies have demonstrated that 3D Gaussian splatting (3DGS) outperforms neural fields in efficiency for dynamic lighting scenarios. However, current relighting methods based on 3DGS still struggle to compute high-quality shadows and indirect illumination in real time for dynamic light, leading to unrealistic rendering results. We solve this problem by precomputing the expensive transport simulations required for complex transfer functions like shadowing; the resulting transfer functions are represented as dense sets of vectors or matrices for every Gaussian splat. We introduce distinct precomputing methods tailored for training and rendering stages, along with unique ray tracing and indirect lighting precomputation techniques for 3D Gaussian splats to accelerate training speed and compute accurate indirect lighting related to environment light. Experimental analyses demonstrate that our approach achieves state-of-the-art visual quality while maintaining competitive training times and, importantly, allows high-quality real-time (30+ fps) relighting for dynamic light and relatively complex scenes at 1080p resolution. | PRTGS: Precomputed Radiance Transfer of Gaussian Splats for Real-Time High-Quality Relighting | [
"Yijia Guo",
"Yuanxi Bai",
"Liwen Hu",
"Guo Ziyi",
"Mianzhi Liu",
"Yu Cai",
"Tiejun Huang",
"Lei Ma"
] | Conference | poster | 2408.03538 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SMnhGjIAPO | @inproceedings{
xiang2024adapmtl,
title={Adap{MTL}: Adaptive Pruning Framework for Multitask Learning Model},
author={Mingcan Xiang and Steven Jiaxun Tang and Qizheng Yang and Hui Guan and Tongping Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SMnhGjIAPO}
} | In the domain of multimedia and multimodal processing, the efficient handling of diverse data streams—such as images, video, and sensor data—is paramount. Model compression and multitask learning (MTL) are crucial in this field, offering the potential to address the resource-intensive demands of processing and interpreting multiple forms of media simultaneously. However, effectively compressing a multitask model presents significant challenges due to the complexities of balancing sparsity allocation and accuracy performance across multiple tasks. To tackle the challenges, we propose AdapMTL, an adaptive pruning framework for MTL models. AdapMTL leverages multiple learnable soft thresholds independently assigned to the shared backbone and the task-specific heads to capture the nuances in different components' sensitivity to pruning. During training, it co-optimizes the soft thresholds and MTL model weights to automatically determine the suitable sparsity level at each component in order to achieve both high task accuracy and high overall sparsity. It further incorporates an adaptive weighting mechanism that dynamically adjusts the importance of task-specific losses based on each task's robustness to pruning. We demonstrate the effectiveness of AdapMTL through comprehensive experiments on popular multitask datasets, namely NYU-v2 and Tiny-Taskonomy, with different architectures, showcasing superior performance compared to state-of-the-art pruning methods. | AdapMTL: Adaptive Pruning Framework for Multitask Learning Model | [
"Mingcan Xiang",
"Steven Jiaxun Tang",
"Qizheng Yang",
"Hui Guan",
"Tongping Liu"
] | Conference | poster | 2408.03913 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SMOUQtEaAf | @inproceedings{
wang2024whitebox,
title={White-box Multimodal Jailbreaks Against Large Vision-Language Models},
author={RUOFAN WANG and Xingjun Ma and Hanxu Zhou and Chuanjun Ji and Guangnan Ye and Yu-Gang Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SMOUQtEaAf}
} | Recent advancements in Large Vision-Language Models (VLMs) have underscored their superiority in various multimodal tasks. However, the adversarial robustness of VLMs has not been fully explored. Existing methodologies mainly assess robustness through unimodal adversarial attacks that perturb images, while assuming inherent resilience against text-based attacks. In contrast, our methodology adopts a comprehensive strategy that jointly attacks both text and image modalities to exploit a broader spectrum of vulnerability within VLMs. Furthermore, we propose a dual optimization objective aimed at guiding the model to generate affirmative responses with high toxicity. Specifically, we begin by optimizing an adversarial image prefix from random noise to generate diverse harmful responses in the absence of text input, thus imbuing the image with toxic semantics. Subsequently, an adversarial text suffix is integrated and co-optimized with the adversarial image prefix to maximize the probability of eliciting affirmative responses to various harmful instructions. The discovered adversarial image prefix and text suffix are collectively denoted as a Universal Master Key (UMK). When integrated into various malicious queries, UMK can circumvent the alignment defenses of VLMs and lead to the generation of objectionable content, known as jailbreaks. The experimental results demonstrate that our universal attack strategy can effectively jailbreak MiniGPT-4 with a 96% success rate, highlighting the fragility of VLMs and the exigency for new alignment strategies. | White-box Multimodal Jailbreaks Against Large Vision-Language Models | [
"RUOFAN WANG",
"Xingjun Ma",
"Hanxu Zhou",
"Chuanjun Ji",
"Guangnan Ye",
"Yu-Gang Jiang"
] | Conference | poster | 2405.17894 | [
"https://github.com/roywang021/UMK"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SA1B0Lfxse | @inproceedings{
zhang2024towards,
title={Towards Robust Physical-world Backdoor Attacks on Lane Detection},
author={Xinwei Zhang and Aishan Liu and Tianyuan Zhang and Siyuan Liang and Xianglong Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=SA1B0Lfxse}
} | Deep learning-based lane detection (LD) plays a critical role in autonomous driving systems, such as adaptive cruise control. However, it is vulnerable to backdoor attacks. Existing backdoor attack methods on LD exhibit limited effectiveness in dynamic real-world scenarios, primarily because they fail to consider dynamic scene factors, including changes in driving perspectives (e.g., viewpoint transformations) and environmental conditions (e.g., weather or lighting changes). To tackle this issue, this paper introduces BadLANE, a dynamic scene adaptation backdoor attack for LD designed to withstand changes in real-world dynamic scene factors. To address the challenges posed by changing driving perspectives, we propose an amorphous trigger pattern composed of shapeless pixels. This trigger design allows the backdoor to be activated by various forms or shapes of mud spots or pollution on the road or lens, enabling adaptation to changes in vehicle observation viewpoints during driving. To mitigate the effects of environmental changes, we design a meta-learning framework to train meta-generators tailored to different environmental conditions. These generators produce meta-triggers that incorporate diverse environmental information, such as weather or lighting conditions, as the initialization of the trigger patterns for backdoor implantation, thus enabling adaptation to dynamic environments. Extensive experiments on various commonly used LD models in both digital and physical domains validate the effectiveness of our attacks, outperforming other baselines significantly (+25.15% on average in Attack Success Rate). Our code is available on the anonymous website. | Towards Robust Physical-world Backdoor Attacks on Lane Detection | [
"Xinwei Zhang",
"Aishan Liu",
"Tianyuan Zhang",
"Siyuan Liang",
"Xianglong Liu"
] | Conference | poster | 2405.05553 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=S7pieMItch | @inproceedings{
hu2024mplugpaperowl,
title={m{PLUG}-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model},
author={Anwen Hu and Yaya Shi and Haiyang Xu and Jiabo Ye and Qinghao Ye and Ming Yan and Chenliang Li and Qi Qian and Ji Zhang and Fei Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=S7pieMItch}
} | Recently, the strong text creation ability of Large Language Models(LLMs) has given rise to many tools for assisting paper reading or even writing. However, the weak diagram analysis abilities of LLMs or Multimodal LLMs greatly limit their application scenarios, especially for scientific academic paper writing. In this work, towards a more versatile copilot for academic paper writing, we mainly focus on strengthening the multi-modal diagram analysis ability of Multimodal LLMs. By parsing Latex source files of high-quality papers, we carefully build a multi-modal diagram understanding dataset M-Paper. By aligning diagrams in the paper with related paragraphs, we construct professional diagram analysis samples for training and evaluation. M-Paper is the first dataset to support joint comprehension of multiple scientific diagrams, including figures and tables in the format of images or Latex codes. Besides, to better align the copilot with the user's intention, we introduce the `outline' as the control signal, which could be directly given by the user or revised based on auto-generated ones. Comprehensive experiments with a state-of-the-art Multimodal LLM demonstrate that training on our dataset shows stronger scientific diagram understanding performance, including diagram captioning, diagram analysis, and outline recommendation. The dataset, code, and model will be publicly available. | mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model | [
"Anwen Hu",
"Yaya Shi",
"Haiyang Xu",
"Jiabo Ye",
"Qinghao Ye",
"Ming Yan",
"Chenliang Li",
"Qi Qian",
"Ji Zhang",
"Fei Huang"
] | Conference | poster | 2311.18248 | [
"https://github.com/x-plug/mplug-docowl"
] | https://huggingface.co/papers/2311.18248 | 1 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=S6fd7X47Ra | @inproceedings{
luo2024dualview,
title={Dual-view Pyramid Network for Video Frame Interpolation},
author={Yao Luo and Ming Yang and Jinhui Tang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=S6fd7X47Ra}
} | Video frame interpolation is a critical component of video streaming, a vibrant research area dealing with requests of both service providers and users. However, most existing methods cannot handle changing video resolutions while improving user perceptual quality. We aim to unleash the multifaceted knowledge yielded by the hierarchical views at multiple scales in a pyramid network. Specifically, we build a dual-view pyramid network by introducing pyramidal dual-view correspondence matching. It compels each scale to actively seek knowledge in view of both the current scale and a coarser scale, conducting robust correspondence matching by considering neighboring scales. Meanwhile, an auxiliary multi-scale collaborative supervision is devised to enforce the exchange of knowledge between the current scale and a finer scale and thus reduce error propagation from coarse to fine scales. Based on the robust video dynamics captured by pyramidal dual-view correspondence matching, we further develop a pyramidal refinement module that formulates frame refinement as progressive latent representation generations by developing flow-guided cross-scale attention for feature fusion among neighboring frames. The proposed method achieves favorable performance on several benchmarks of varying video resolutions with better user perceptual quality and a relatively compact model size. | Dual-view Pyramid Network for Video Frame Interpolation | [
"Yao Luo",
"Ming Yang",
"Jinhui Tang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=S3lhjHLPl7 | @inproceedings{
chen2024magic,
title={Magic Clothing: Controllable Garment-Driven Image Synthesis},
author={Weifeng Chen and Tao Gu and Yuhao Xu and Arlene Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=S3lhjHLPl7}
} | We propose Magic Clothing, a latent diffusion model (LDM)-based network architecture for an unexplored garment-driven image synthesis task. Aiming at generating customized characters wearing the target garments with diverse text prompts, the image controllability is the most critical issue, i.e., to preserve the garment details and maintain faithfulness to the text prompts. To this end, we introduce a garment extractor to capture the detailed garment features, and employ self-attention fusion to incorporate them into the pretrained LDMs, ensuring that the garment details remain unchanged on the target character. Then, we leverage the joint classifier-free guidance to balance the control of garment features and text prompts over the generated results. Meanwhile, the proposed garment extractor is a plug-in module applicable to various finetuned LDMs, and it can be combined with other extensions like ControlNet and IP-Adapter to enhance the diversity and controllability of the generated characters. Furthermore, we design Matched-Points-LPIPS (MP-LPIPS), a robust metric for evaluating the consistency of the target image to the source garment. Extensive experiments demonstrate that our Magic Clothing achieves state-of-the-art results under various conditional controls for garment-driven image synthesis. Our source code is publicly available (for the review process, please refer to our supplementary material). | Magic Clothing: Controllable Garment-Driven Image Synthesis | [
"Weifeng Chen",
"Tao Gu",
"Yuhao Xu",
"Arlene Chen"
] | Conference | poster | 2404.09512 | [
"https://github.com/shinechen1024/magicclothing"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RyPLu4tzU6 | @inproceedings{
jiang2024seds,
title={{SEDS}: Semantically Enhanced Dual-Stream Encoder for Sign Language Retrieval},
author={Longtao Jiang and Min Wang and Zecheng Li and Yao Fang and Wengang Zhou and Houqiang Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RyPLu4tzU6}
} | Sign language retrieval, as an emerging visual-language task, has received widespread attention. Different from traditional video retrieval, it is more biased towards understanding the semantic information of human actions contained in video clips. Previous works typically only encode RGB videos to obtain high-level semantic features, resulting in local action details drowned in a large amount of visual information redundancy. Furthermore, existing RGB-based sign retrieval works suffer from the huge memory cost of dense visual data embedding in end-to-end training, and adopt offline RGB encoder instead, leading to suboptimal feature representation. To address these issues, we propose a novel sign language representation framework called Semantically Enhanced Dual-Stream Encoder (SEDS), which integrates Pose and RGB modalities to represent the local and global information of sign language videos. Specifically, the Pose encoder embeds the coordinates of keypoints corresponding to human joints, effectively capturing detailed action features. For better context-aware fusion of two video modalities, we propose a Cross Gloss Attention Fusion (CGAF) module to aggregate the adjacent clip features with similar semantic information from intra-modality and inter-modality. Moreover, a Pose-RGB Fine-grained Matching Objective is developed to enhance the aggregated fusion feature by contextual matching of fine-grained dual-stream features. Besides the offline RGB encoder, the whole framework only contains learnable lightweight networks, which can be trained end-to-end. Extensive experiments demonstrate that our framework significantly outperforms state-of-the-art methods on How2Sign, PHOENIX-2014T, and CSL-Daily datasets. | SEDS: Semantically Enhanced Dual-Stream Encoder for Sign Language Retrieval | [
"Longtao Jiang",
"Min Wang",
"Zecheng Li",
"Yao Fang",
"Wengang Zhou",
"Houqiang Li"
] | Conference | poster | 2407.16394 | [
"https://github.com/longtaojiang/seds"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RwKFiLb9Ez | @inproceedings{
lin2024suppressing,
title={Suppressing Uncertainties in Degradation Estimation for Blind Super-Resolution},
author={Junxiong Lin and Zeng Tao and Xuan Tong and Xinji Mai and Haoran Wang and Boyang Wang and Yan Wang and Qing Zhao and Jiawen Yu and Yuxuan Lin and Shaoqi Yan and Shuyong Gao and Wenqiang Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RwKFiLb9Ez}
} | The problem of blind image super-resolution aims to recover high-resolution (HR) images from low-resolution (LR) images with unknown degradation modes. Most existing methods model the image degradation process using blur kernels. However, this explicit modeling approach struggles to cover the complex and varied degradation processes encountered in the real world, such as high-order combinations of JPEG compression, blur, and noise. Implicit modeling for the degradation process can effectively overcome this issue, but a key challenge of implicit modeling is the lack of accurate ground truth labels for the degradation process to conduct supervised training. To overcome the limitations inherent in implicit modeling, we propose an Uncertainty-based degradation representation for blind Super-Resolution framework (USR). By suppressing the uncertainty of local degradation representations in images, USR facilitates self-supervised learning of degradation representations. The USR consists of two components: Adaptive Uncertainty-Aware Degradation Extraction (AUDE) and a feature extraction network composed of Variable Depth Dynamic Convolution (VDDC) blocks. To extract Uncertainty-based Degradation Representation from LR images, the AUDE utilizes the Self-supervised Uncertainty Contrast module with Uncertainty Suppression Loss to suppress the inherent model uncertainty of the Degradation Extractor. Furthermore, the VDDC block integrates degradation information through dynamic convolution. The VDDC also employs an Adaptive Intensity Scaling operation that adaptively adjusts the degradation representation according to the network hierarchy, thereby facilitating the effective integration of degradation information. Quantitative and qualitative experiments affirm the superiority of our approach. | Suppressing Uncertainties in Degradation Estimation for Blind Super-Resolution | [
"Junxiong Lin",
"Zeng Tao",
"Xuan Tong",
"Xinji Mai",
"Haoran Wang",
"Boyang Wang",
"Yan Wang",
"Qing Zhao",
"Jiawen Yu",
"Yuxuan Lin",
"Shaoqi Yan",
"Shuyong Gao",
"Wenqiang Zhang"
] | Conference | poster | 2406.16459 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RqI2i7HEJ8 | @inproceedings{
tang2024lidarnerf,
title={Li{DAR}-Ne{RF}: Novel Li{DAR} View Synthesis via Neural Radiance Fields},
author={Tao Tang and Longfei Gao and Guangrun Wang and Yixing Lao and Peng Chen and Hengshuang Zhao and Dayang Hao and Xiaodan Liang and Mathieu Salzmann and Kaicheng Yu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RqI2i7HEJ8}
} | We introduce a new task, novel view synthesis for LiDAR sensors. While traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views, they fall short of producing accurate and realistic LiDAR patterns because the renderers rely on explicit 3D reconstruction and exploit game engines, that ignore important attributes of LiDAR points. We address this challenge by formulating, to the best of our knowledge, the first differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, leveraging a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points. However, simply employing NeRF cannot achieve satisfactory results, as it only focuses on learning individual pixels while ignoring local information, especially at low texture areas, resulting in poor geometry. To this end, we have taken steps to address this issue by introducing a structural regularization method to preserve local structural details. To evaluate the effectiveness of our approach, we establish an object-centric multi-view LiDAR dataset, dubbed NeRF-MVL. It contains observations of objects from 9 categories seen from 360-degree viewpoints captured with multiple LiDAR sensors. Our extensive experiments on the scene-level KITTI-360 dataset, and on our object-level NeRF-MVL show that our LiDAR-NeRF surpasses the model-based algorithms significantly. | LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields | [
"Tao Tang",
"Longfei Gao",
"Guangrun Wang",
"Yixing Lao",
"Peng Chen",
"Hengshuang Zhao",
"Dayang Hao",
"Xiaodan Liang",
"Mathieu Salzmann",
"Kaicheng Yu"
] | Conference | oral | 2304.10406 | [
"https://github.com/tangtaogo/lidar-nerf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RjZeXdKwRE | @inproceedings{
devanathan2024seeing,
title={Seeing Beyond Words: Multimodal Aspect-Level Complaint Detection in Ecommerce Videos},
author={Rishikesh Devanathan and APOORVA SINGH and A.S. Poornash and Sriparna Saha},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RjZeXdKwRE}
} | Complaints are pivotal expressions within e-commerce communication, yet the intricate nuances of human interaction present formidable challenges for AI agents to grasp comprehensively. While recent attention has been drawn to analyzing complaints within a multimodal context, relying solely on text and images is insufficient for organizations. The true value lies in the ability to pinpoint complaints within the intricate structures of discourse, scrutinizing them at a granular aspect level. Our research delves into the discourse structure of e-commerce video-based product reviews, pioneering a novel task we term Aspect-Level Complaint Detection from Discourse (ACDD). Embedded in a multimodal framework, this task entails identifying aspect categories and assigning complaint/non-complaint labels at a nuanced aspect level. To facilitate this endeavour, we have curated a unique multimodal product review dataset, meticulously annotated at the utterance level with aspect categories and associated complaint labels.
To support this undertaking, we introduce a Multimodal Aspect-Aware Complaint Analysis (MAACA) model that incorporates a novel pre-training strategy and a global feature fusion technique across the three modalities. Additionally, the proposed framework leverages a moment retrieval step to identify the relevant portion of the clip, crucial for accurately detecting the fine-grained aspect categories and conducting aspect-level complaint detection. Extensive experiments conducted on the proposed dataset showcase that our framework outperforms unimodal and bimodal baselines, offering valuable insights into the application of video-audio-text representation learning frameworks for downstream tasks. | Seeing Beyond Words: Multimodal Aspect-Level Complaint Detection in Ecommerce Videos | [
"Rishikesh Devanathan",
"APOORVA SINGH",
"A.S. Poornash",
"Sriparna Saha"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RhpGGXDg8S | @inproceedings{
qian2024clusterphys,
title={Cluster-Phys: Facial Clues Clustering Towards Efficient Remote Physiological Measurement},
author={Wei Qian and Kun Li and Dan Guo and Bin Hu and Meng Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RhpGGXDg8S}
} | Remote photoplethysmography (rPPG) measurement aims to estimate physiological signals by analyzing subtle skin color changes induced by heartbeats in facial videos. Existing methods primarily rely on the fundamental video frame features or vanilla facial ROI (region of interest) features. Recognizing the varying light absorption and reactions of different facial regions over time, we adopt a new perspective to conduct a more fine-grained exploration of the key clues present in different facial regions within each frame and across temporal frames. Concretely, we propose a novel clustering-driven remote physiological measurement framework called Cluster-Phys, which employs a facial ROI prototypical clustering module to adaptively cluster the representative facial ROI features as facial prototypes and then update facial prototypes with highly semantic correlated base ROI features. In this way, our approach can mine facial clues from a more compact and informative prototype level rather than the conventional video/ROI level. Furthermore, we also propose a spatial-temporal prototype interaction module to learn facial prototype correlation from both spatial (across prototypes) and temporal (within prototype) perspectives. Extensive experiments are conducted on both intra-dataset and cross-dataset tests. The results show that our Cluster-Phys achieves significant performance improvement with less computation consumption. The source code will be available at https://github.com/VUT-HFUT/ClusterPhys. | Cluster-Phys: Facial Clues Clustering Towards Efficient Remote Physiological Measurement | [
"Wei Qian",
"Kun Li",
"Dan Guo",
"Bin Hu",
"Meng Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RYtwNywxPf | @inproceedings{
zhou2024pair,
title={{PAIR}: Pre-denosing Augmented Image Retrieval Model for Defending Adversarial Patches},
author={Ziyang Zhou and Pinghui Wang and Zi Liang and Ruofei Zhang and Haitao Bai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RYtwNywxPf}
} | Deep neural networks are widely used in retrieval systems. However, they are notoriously vulnerable to attack. Among the various forms of adversarial attacks, the patch attack is one of the most threatening forms. This type of attack can introduce cognitive biases into the retrieval system by inserting deceptive patches into images. Despite the seriousness of this threat, there are still no well-established solutions for image retrieval systems. In this paper, we propose the Pre-denosing Augmented Image Retrieval (PAIR) model, a new approach designed to protect image retrieval systems against adversarial patch attacks. The core strategy of PAIR is to dynamically and randomly reconstruct entire images based on their semantic content. This purifies well-designed patch attacks while preserving the semantic integrity of the images. Furthermore, we present a novel training strategy that incorporates a semantic discriminator. This discriminator significantly improves PAIR's ability to capture real semantics and reconstruct images. Experiments show that PAIR significantly outperforms existing defense methods. It effectively reduces the success rate of two state-of-the-art patch attack methods to below 5%, achieving a 14% improvement over current leading methods. Moreover, in defending against other forms of attack, such as global perturbation attacks, PAIR also achieves competitive results. The codes are available at: https://anonymous.4open.science/r/PAIR-8FD2. | PAIR: Pre-denosing Augmented Image Retrieval Model for Defending Adversarial Patches | [
"Ziyang Zhou",
"Pinghui Wang",
"Zi Liang",
"Ruofei Zhang",
"Haitao Bai"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RTlMbI4OSg | @inproceedings{
wei2024exploring,
title={Exploring the Use of Abusive Generative {AI} Models on Civitai},
author={Yiluo Wei and Yiming Zhu and Pan Hui and Gareth Tyson},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RTlMbI4OSg}
} | The rise of generative AI is transforming the landscape of digital imagery, and exerting a significant influence on online creative communities. This has led to the emergence of AI-Generated Content (AIGC) social platforms, such as Civitai. These distinctive social platforms allow users to build and share their own generative AI models, thereby enhancing the potential for more diverse artistic expression. Designed in the vein of social networks, they also provide artists with the means to showcase their creations (generated from the models), engage in discussions, and obtain feedback, thus nurturing a sense of community. Yet, this openness also raises concerns about the abuse of such platforms, e.g., using models to disseminate deceptive deepfakes or infringe upon copyrights. To explore this, we conduct the first comprehensive empirical study of an AIGC social platform, focusing on its use for generating abusive content. As an exemplar, we construct a comprehensive dataset covering Civitai, the largest available AIGC social platform. Based on this dataset of 87K models and 2M images, we explore the characteristics of content and discuss strategies for moderation to better govern these platforms. | Exploring the Use of Abusive Generative AI Models on Civitai | [
"Yiluo Wei",
"Yiming Zhu",
"Pan Hui",
"Gareth Tyson"
] | Conference | poster | 2407.12876 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RSFpL35nIq | @inproceedings{
guo2024xprompt,
title={X-Prompt: Multi-modal Visual Prompt for Video Object Segmentation},
author={Pinxue Guo and Wanyun Li and Hao Huang and Lingyi Hong and Xinyu Zhou and Zhaoyu Chen and Jinglun Li and Kaixun Jiang and Wei Zhang and Wenqiang Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RSFpL35nIq}
} | Multi-modal Video Object Segmentation (VOS), including RGB-Thermal, RGB-Depth, and RGB-Event, has garnered attention due to its capability to address challenging scenarios where traditional VOS methods struggle, such as extreme illumination, rapid motion, and background distraction. Existing approaches often involve designing specific additional branches and performing full-parameter fine-tuning for fusion in each task. However, this approach not only duplicates research efforts and hardware costs but also risks model collapse with the limited multi-modal annotated data. In this paper, we propose a universal framework named X-Prompt for all multi-modal video object segmentation tasks, designated as RGB+X. The X-Prompt framework first pre-trains a video object segmentation foundation model using RGB data, and then utilizes the additional modality of the prompt to adapt it to downstream multi-modal tasks with limited data. Within the X-Prompt framework, we introduce the Multi-modal Visual Prompter (MVP), which allows prompting the foundation model with various modalities to segment objects precisely. We further propose the Multi-modal Adaptation Expert (MAEs) to adapt the foundation model with pluggable modality-specific knowledge without compromising the generalization capacity. To evaluate the effectiveness of the X-Prompt framework, we conduct extensive experiments on 3 tasks across 4 benchmarks. The proposed universal X-Prompt framework consistently outperforms the full fine-tuning paradigm and achieves state-of-the-art performance. Codes will be available. | X-Prompt: Multi-modal Visual Prompt for Video Object Segmentation | [
"Pinxue Guo",
"Wanyun Li",
"Hao Huang",
"Lingyi Hong",
"Xinyu Zhou",
"Zhaoyu Chen",
"Jinglun Li",
"Kaixun Jiang",
"Wei Zhang",
"Wenqiang Zhang"
] | Conference | poster | 2409.19342 | [
"https://github.com/pinxueguo/x-prompt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RQctkul2GW | @inproceedings{
wu2024robust,
title={Robust Multimodal Sentiment Analysis of Image-Text Pairs by Distribution-Based Feature Recovery and Fusion},
author={Daiqing Wu and Dongbao Yang and Yu Zhou and Can Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RQctkul2GW}
} | As posts on social media increase rapidly, analyzing the sentiments embedded in image-text pairs has become a popular research topic in recent years. Although existing works achieve impressive accomplishments in simultaneously harnessing image and text information, they lack the considerations of possible low-quality and missing modalities. In real-world applications, these issues might frequently occur, leading to urgent needs for models capable of predicting sentiment robustly. Therefore, we propose a Distribution-based feature Recovery and Fusion (DRF) method for robust multimodal sentiment analysis of image-text pairs. Specifically, we maintain a feature queue for each modality to approximate their feature distributions, through which we can simultaneously handle low-quality and missing modalities in a unified framework. For low-quality modalities, we reduce their contributions to the fusion by quantitatively estimating modality qualities based on the distributions. For missing modalities, we build inter-modal mapping relationships supervised by samples and distributions, thereby recovering the missing modalities from available ones. In experiments, two disruption strategies that corrupt and discard some modalities in samples are adopted to mimic the low-quality and missing modalities in various real-world scenarios. Through comprehensive experiments on three publicly available image-text datasets, we demonstrate the universal improvements of DRF compared to SOTA methods under both two strategies, validating its effectiveness in robust multimodal sentiment analysis. | Robust Multimodal Sentiment Analysis of Image-Text Pairs by Distribution-Based Feature Recovery and Fusion | [
"Daiqing Wu",
"Dongbao Yang",
"Yu Zhou",
"Can Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ROsHwGMYeJ | @inproceedings{
xu2024mitigate,
title={Mitigate Catastrophic Remembering via Continual Knowledge Purification for Noisy Lifelong Person Re-Identification},
author={Kunlun Xu and Haozhuo Zhang and Yu Li and Yuxin Peng and Jiahuan Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ROsHwGMYeJ}
} | Current lifelong person re-identification (LReID) methods focus on tackling a clean data stream with correct labels. When noisy data with wrong labels are given, their performance is severely degraded since the model inevitably and continually remembers erroneous knowledge induced by the noise. Moreover, the well-known catastrophic forgetting issue in LReID becomes even more challenging since the correct knowledge contained in the old model is disrupted by noisy labels. Such a practical noisy LReID task is important but challenging, and few works have attempted to handle it so far. In this paper, we initially investigate noisy LReID by proposing a Continual Knowledge Purification (CKP) method to address the catastrophic remembering of erroneous knowledge and catastrophic forgetting of correct knowledge simultaneously. Specifically, a Cluster-aware Data Purification module (CDP) is designed to obtain a cleaner subset of the given noisy data for learning. To achieve this, the label confidence is estimated based on the intra-identity clustering result where the high-confidence data are maintained. Besides, an Iterative Label Rectification (ILR) pipeline is proposed to rectify wrong labels by fusing the prediction and label information throughout the training epochs. Therefore, the noisy data are rectified progressively to facilitate new model learning. To handle the catastrophic remembering and forgetting issues, an Erroneous Knowledge Filtering (EKF) algorithm is proposed to estimate the knowledge correctness of the old model, and a weighted knowledge distillation loss is designed to transfer the correct old knowledge to the new model while excluding the erroneous one. Finally, a Noisy LReID benchmark is constructed for performance evaluation, and extensive experimental results demonstrate that our proposed CKP method achieves state-of-the-art performance. | Mitigate Catastrophic Remembering via Continual Knowledge Purification for Noisy Lifelong Person Re-Identification | [
"Kunlun Xu",
"Haozhuo Zhang",
"Yu Li",
"Yuxin Peng",
"Jiahuan Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RNj1rENImB | @inproceedings{
zhang2024informative,
title={Informative Point cloud Dataset Extraction for Classification via Gradient-based Points Moving},
author={Wenxiao Zhang and Ziqi Wang and Li Xu and Xun Yang and Jun Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RNj1rENImB}
} | Point clouds play a significant role in recent learning-based vision tasks, as they contain additional information about the physical space compared to 2D images. However, such a 3D data format also results in more expensive computation costs to train a sophisticated network with large 3D datasets. Previous methods for point cloud compression focus on compacting the representation of each point cloud for better storage and transmission. In this paper, we introduce a new open problem in the point cloud field: Can we compress a large point cloud dataset into a much smaller synthetic dataset while preserving the important information of the original large dataset? In other words, we explore the possibility of training a network on a smaller dataset of informative point clouds extracted from the original large dataset but maintaining similar network performance. Training on this small synthetic dataset could largely improve the training efficiency. To explore this new open problem, we formulate it as a parameter-matching issue where a network could get similar network parameters after training on the original set and the generated synthetic set, respectively. We find that we could achieve this goal by moving the critical points within each initial point cloud through an iterative gradient matching strategy. We conduct extensive experiments on various synthetic and real-scanned 3D object classification benchmarks, showing that our synthetic dataset could achieve almost the same performance with only 5% of the point clouds of the ScanObjectNN dataset compared to training with the full dataset. | Informative Point cloud Dataset Extraction for Classification via Gradient-based Points Moving | [
"Wenxiao Zhang",
"Ziqi Wang",
"Li Xu",
"Xun Yang",
"Jun Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=REwjoWjVQm | @inproceedings{
huang2024languageguided,
title={Language-Guided Visual Prompt Compensation for Multi-Modal Remote Sensing Image Classification with Modality Absence},
author={Ling Huang and Wenqian Dong and Song xiao and Jiahui Qu and Yuanbo Yang and Yunsong Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=REwjoWjVQm}
} | Joint classification of multi-modal remote sensing images has achieved great success thanks to complementary advantages of multi-modal images. However, modality absence is a common dilemma in real world caused by imaging conditions, which leads to a breakdown of most classification methods that rely on complete modalities. Existing approaches either learn shared representations or train specific models for each absence case so that they commonly confront the difficulty of balancing the complementary advantages of the modalities and scalability of the absence case. In this paper, we propose a language-guided visual prompt compensation network (LVPCnet) to achieve joint classification in case of arbitrary modality absence using a unified model that simultaneously considers modality complementarity. It embeds missing modality-specific knowledge into visual prompts to guide the model in capturing complete modal information from available ones for classification. Specifically, a language-guided visual feature decoupling stage (LVFD-stage) is designed to extract shared and specific modal feature from multi-modal images, establishing a complementary representation model of complete modalities. Subsequently, an absence-aware visual prompt compensation stage (VPC-stage) is proposed to learn visual prompts containing missing modality-specific knowledge through cross-modal representation alignment, further guiding the complementary representation model to reconstruct modality-specific features for missing modalities from available ones based on the learned prompts. The proposed VPC-stage entails solely training visual prompts to perceive missing information without retraining the model, facilitating effective scalability to arbitrary modal missing scenarios. Systematic experiments conducted on three public datasets have validated the effectiveness of the proposed approach. | Language-Guided Visual Prompt Compensation for Multi-Modal Remote Sensing Image Classification with Modality Absence | [
"Ling Huang",
"Wenqian Dong",
"Song xiao",
"Jiahui Qu",
"Yuanbo Yang",
"Yunsong Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RCD9rwbqt4 | @inproceedings{
liu2024controllable,
title={Controllable Procedural Generation of Landscapes},
author={Jia-Hong Liu and Shao-Kui Zhang and Chuyue Zhang and Song-Hai Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RCD9rwbqt4}
} | Landscapes, recognized for their indispensable role in large-scale scenes, are experiencing growing demand. However, the manual modeling of such content is labor-intensive and lacks efficiency. Procedural Content Generation (PCG) techniques enable the rapid generation of diverse landscape elements. Nevertheless, ordinary users may encounter difficulties controlling these methods for desired results. In this paper, we introduce a controllable framework for procedurally generating landscapes. We integrate state-of-the-art Large Language Models (LLMs) to enhance user accessibility and control. By converting plain text inputs into parameters through LLMs, our framework allows ordinary users to generate a batch of plausible landscapes tailored to their specifications. A parameter-controlled PCG procedure is designed to leverage optimization techniques and employ rule-based refinements. It achieves harmonious layering in terrains, zoning, and roads while enabling aesthetic arrangement of vegetation and artificial elements. Extensive experiments demonstrate our framework's effectiveness in generating landscapes comparable to those crafted by experienced architects. Our framework has the potential to enhance the productivity of landscape designers significantly. | Controllable Procedural Generation of Landscapes | [
"Jia-Hong Liu",
"Shao-Kui Zhang",
"Chuyue Zhang",
"Song-Hai Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RBCxmxAgqo | @inproceedings{
duan2024reasonandexecute,
title={Reason-and-Execute Prompting: Enhancing MultiModal Large Language Models for Solving Geometry Questions},
author={Xiuliang Duan and Dating Tan and Liangda Fang and Yuyu Zhou and Chaobo He and Ziliang Chen and Lusheng Wu and Guanliang Chen and Zhiguo Gong and Weiqi Luo and Quanlong Guan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RBCxmxAgqo}
} | MultiModal Large Language Models (MM-LLMs) have demonstrated exceptional reasoning abilities in various visual question-answering tasks. However, they encounter significant challenges when answering geometry questions. These challenges arise due to the need to engage in rigorous reasoning and executing precise arithmetic. To enhance the ability of LLMs to solve multimodal geometric questions, we propose Reason-and-Execute (RaE) prompting: a new prompting method specifically designed for enhancing MM-LLMs to solve geometric questions. Specifically, we first designed a rigorous reasoning process based on domain knowledge of geometry, using a reverse thinking approach, and obtained the precise arithmetic steps required for solving the question. Secondly, based on the analysis of the reasoning process, we designed code blocks in a programming language to implement the arithmetic functions. Finally, by executing the contents of the code blocks using an interpreter, we obtained the answers to the geometric questions. We evaluated the accuracy of 9 models in answering questions on 6 datasets (including four geometry datasets and two science datasets) using different prompting templates. Specifically, in the main experimental result, our RaE showed a maximum enhancement of 12.8% compared to other prompting methods, which proves strong reasoning and arithmetic abilities in solving geometric questions of our method. Moreover, we analyzed the impact of answering from the perspective of solving geometric problems by considering multiple factors, including domain knowledge, geometry shapes, understanding of the question text, and language. This once again emphasizes that our method has passed the comprehensive test of solving geometry questions. The source code and data will be published in a GitHub repository. | Reason-and-Execute Prompting: Enhancing MultiModal Large Language Models for Solving Geometry Questions | [
"Xiuliang Duan",
"Dating Tan",
"Liangda Fang",
"Yuyu Zhou",
"Chaobo He",
"Ziliang Chen",
"Lusheng Wu",
"Guanliang Chen",
"Zhiguo Gong",
"Weiqi Luo",
"Quanlong Guan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RAcMolFYmK | @inproceedings{
lin2024peneo,
title={{PE}neo: Unifying Line Extraction, Line Grouping, and Entity Linking for End-to-end Document Pair Extraction},
author={Zening Lin and Jiapeng Wang and Teng Li and Wenhui Liao and DAYI HUANG and Longfei Xiong and Lianwen Jin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RAcMolFYmK}
} | Document pair extraction aims to identify key and value entities as well as their relationships from visually-rich documents. Most existing methods divide it into two separate tasks: semantic entity recognition (SER) and relation extraction (RE). However, simply concatenating SER and RE serially can lead to severe error propagation, and it fails to handle cases like multi-line entities in real scenarios. To address these issues, this paper introduces a novel framework, **PEneo** (**P**air **E**xtraction **n**ew d**e**coder **o**ption), which performs document pair extraction in a unified pipeline, incorporating three concurrent sub-tasks: line extraction, line grouping, and entity linking. This approach alleviates the error accumulation problem and can handle the case of multi-line entities. Furthermore, to better evaluate the model's performance and to facilitate future research on pair extraction, we introduce RFUND, a re-annotated version of the commonly used FUNSD and XFUND datasets, to make them more accurate and cover realistic situations. Experiments on various benchmarks demonstrate PEneo's superiority over previous pipelines, boosting the performance by a large margin (e.g., 19.89%-22.91% F1 score on RFUND-EN) when combined with various backbones like LiLT and LayoutLMv3, showing its effectiveness and generality. Codes and the new annotations will be open to the public. | PEneo: Unifying Line Extraction, Line Grouping, and Entity Linking for End-to-end Document Pair Extraction | [
"Zening Lin",
"Jiapeng Wang",
"Teng Li",
"Wenhui Liao",
"DAYI HUANG",
"Longfei Xiong",
"Lianwen Jin"
] | Conference | poster | 2401.03472 | [
"https://github.com/ZeningLin/PEneo"
] | https://huggingface.co/papers/2401.03472 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=RAUOcGo3Qt | @inproceedings{
huang2024crest,
title={{CREST}: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-ShoT Learning},
author={Haojian Huang and Xiaozhennn Qiao and Zhuo Chen and Haodong Chen and Binyu Li and Zhe Sun and Mulin Chen and Xuelong Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=RAUOcGo3Qt}
} | Zero-shot learning (ZSL) enables the recognition of novel classes by leveraging semantic knowledge transfer from known to unknown categories. This knowledge, typically encapsulated in attribute descriptions, aids in identifying class-specific visual features, thus facilitating visual-semantic alignment and improving ZSL performance. However, real-world challenges such as distribution imbalances and attribute co-occurrence among instances often hinder the discernment of local variances in images, a problem exacerbated by the scarcity of fine-grained, region-specific attribute annotations. Moreover, the variability in visual presentation within categories can also skew attribute-category associations. In response, we propose a bidirectional cross-modal ZSL approach CREST. It begins by extracting representations for attribute and visual localization and employs Evidential Deep Learning (EDL) to measure underlying epistemic uncertainty, thereby enhancing the model's resilience against hard negatives. CREST incorporates dual learning pathways, focusing on both visual-category and attribute-category alignments, to ensure robust correlation between latent and observable spaces. Moreover, we introduce an uncertainty-informed cross-modal fusion technique to refine visual-attribute inference. Extensive experiments demonstrate our model's effectiveness and unique explainability across multiple datasets. Our code and data are available at: https://anonymous.4open.science/r/CREST-1CEC. | CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-ShoT Learning | [
"Haojian Huang",
"Xiaozhennn Qiao",
"Zhuo Chen",
"Haodong Chen",
"Binyu Li",
"Zhe Sun",
"Mulin Chen",
"Xuelong Li"
] | Conference | poster | 2404.09640 | [
"https://github.com/JethroJames/CREST"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=R6yqrO3tNU | @inproceedings{
liao2024unidllora,
title={Uni-DlLo{RA}: Style Fine-Tuning for Fashion Image Translation},
author={Fangjian Liao and Xingxing Zou and Waikeung Wong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=R6yqrO3tNU}
} | Image-to-image (i2i) translation has achieved notable success, yet remains challenging in scenarios like real-to-illustrative style transfer of fashion. Existing methods focus on enhancing the generative model with diversity while lacking ID-preserved domain translation. This paper introduces a novel model named Uni-DlLoRA to release this constraint. The proposed model combines the original images within a pretrained diffusion-based model using the proposed Uni-adapter extractors, while adopting the proposed Dual-LoRA module to provide distinct style guidance. This approach optimizes generative capabilities and reduces the number of additional parameters required. In addition, a new multimodal dataset featuring higher-quality images with captions built upon an existing real-to-illustration dataset is proposed. Experimentation validates the effectiveness of our proposed method. | Uni-DlLoRA: Style Fine-Tuning for Fashion Image Translation | [
"Fangjian Liao",
"Xingxing Zou",
"Waikeung Wong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=R1OjhQ0iAt | @inproceedings{
zhao2024decoder,
title={Decoder Pretraining with only Text for Scene Text Recognition},
author={Shuai Zhao and Yongkun Du and Zhineng Chen and Yu-Gang Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=R1OjhQ0iAt}
} | Scene text recognition (STR) pre-training methods have achieved remarkable progress, primarily relying on synthetic datasets. However, the domain gap between synthetic and real images poses a challenge in acquiring feature representations that align well with images on real scenes, thereby limiting the performance of these methods. We note that vision-language models like CLIP, pre-trained on extensive real image-text pairs, effectively align images and text in a unified embedding space, suggesting the potential to derive the representations of real images from text alone. Building upon this premise, we introduce a novel method named Decoder Pre-training with only text for STR (DPTR). DPTR treats text embeddings produced by the CLIP text encoder as pseudo visual embeddings and uses them to pre-train the decoder. An Offline Randomized Perturbation (ORP) strategy is introduced. It enriches the diversity of text embeddings by incorporating natural image embeddings extracted from the CLIP image encoder, effectively directing the decoder to acquire the potential representations of real images. In addition, we introduce a Feature Merge Unit (FMU) that guides the extracted visual embeddings focusing on the character foreground within the text image, thereby enabling the pre-trained decoder to work more efficiently and accurately. Extensive experiments across various STR decoders and language recognition tasks underscore the broad applicability and remarkable performance of DPTR, providing a novel insight for STR pre-training. Code is available at https://github.com/Topdu/OpenOCR. | Decoder Pretraining with only Text for Scene Text Recognition | [
"Shuai Zhao",
"Yongkun Du",
"Zhineng Chen",
"Yu-Gang Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QxectWcssD | @inproceedings{
wang2024oneshot,
title={One-Shot Sequential Federated Learning for Non-{IID} Data by Enhancing Local Model Diversity},
author={Naibo Wang and Yuchen Deng and Wenjie Feng and Shichen Fan and Jianwei Yin and See-Kiong Ng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QxectWcssD}
} | Traditional federated learning mainly focuses on parallel settings (PFL), which can suffer significant communication and computation costs. In contrast, one-shot and sequential federated learning (SFL) have emerged as innovative paradigms to alleviate these costs. However, the issue of non-IID (independent and identically distributed) data persists as a significant challenge in one-shot and SFL settings, exacerbated by the restricted communication between clients. In this paper, we improve the one-shot sequential federated learning for non-IID data by proposing a local model diversity-enhancing strategy. Specifically, to leverage the potential of local model diversity for improving model performance, we introduce a local model pool for each client that comprises diverse models generated during local training, and propose two distance measurements to further enhance the model diversity and mitigate the effect of non-IID data. Consequently, our proposed framework can improve the global model performance while maintaining low communication costs. Extensive experiments demonstrate that our method exhibits superior performance to existing one-shot PFL methods and achieves better accuracy compared with state-of-the-art one-shot SFL methods on both label-skew and domain-shift tasks (e.g., 6\%+ accuracy improvement on the CIFAR-10 dataset). | One-Shot Sequential Federated Learning for Non-IID Data by Enhancing Local Model Diversity | [
"Naibo Wang",
"Yuchen Deng",
"Wenjie Feng",
"Shichen Fan",
"Jianwei Yin",
"See-Kiong Ng"
] | Conference | poster | 2404.12130 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QuE5lit38c | @inproceedings{
huang2024anatomical,
title={Anatomical Prior Guided Spatial Contrastive Learning for Few-Shot Medical Image Segmentation},
author={Wendong Huang and Jinwu Hu and Xiuli Bi and Bin Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QuE5lit38c}
} | Few-shot semantic segmentation has considerable potential for low-data scenarios, especially for medical images that require expert-level dense annotations. Existing few-shot medical image segmentation methods strive to deal with the task by means of prototype learning. However, this scheme relies on support prototypes to guide the segmentation of query images, ignoring the rich anatomical prior knowledge in medical images, which hinders effective feature enhancement for medical images. In this paper, we propose an anatomical prior guided spatial contrastive learning, called APSCL, which exploits anatomical prior knowledge derived from medical images to construct contrastive learning from a spatial perspective for few-shot medical image segmentation. The new framework forces the model to learn the features in line with the embedded anatomical representations. Besides, to fully exploit the guidance information of the support samples, we design a mutual guidance decoder to predict the label of each pixel in the query image. Furthermore, our APSCL can be trained end-to-end in the form of episodic training. Comprehensive experiments on three challenging medical image datasets, i.e., CHAOS-T2, MS-CMRSeg, and Synapse, prove that our method significantly surpasses state-of-the-art few-shot medical segmentation methods, with a mean improvement of 3.61%, 2.30%, and 6.38% on the Dice score, respectively. | Anatomical Prior Guided Spatial Contrastive Learning for Few-Shot Medical Image Segmentation | [
"Wendong Huang",
"Jinwu Hu",
"Xiuli Bi",
"Bin Xiao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Qr2iZ1KXwE | @inproceedings{
long2024learning,
title={Learning to Handle Large Obstructions in Video Frame Interpolation},
author={Libo Long and Xiao Hu and Jochen Lang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Qr2iZ1KXwE}
} | Video frame interpolation based on optical flow has made great progress in recent years. Most of the previous studies have focused on improving the quality of clean videos. However, many real-world videos contain large obstructions which cause blur and artifacts making the video discontinuous. To address this challenge, we propose our Obstruction Robustness Framework (ORF) that enhances the robustness of existing VFI networks in the face of large obstructions. The ORF contains two components: (1) A feature repair module that first captures ambiguous pixels in the synthetic frame by a region similarity map, then repairs them with a cross-overlap attention module. (2) A data augmentation strategy that enables the network to handle dynamic obstructions without extra data. To the best of our knowledge, this is the first work that explicitly addresses the error caused by large obstructions in video frame interpolation. By using previous state-of-the-art methods as backbones, our method not only improves the results in original benchmarks but also significantly enhances the interpolation quality for videos with obstructions. | Learning to Handle Large Obstructions in Video Frame Interpolation | [
"Libo Long",
"Xiao Hu",
"Jochen Lang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QqG3wI3c8L | @inproceedings{
shen2024resisting,
title={Resisting Over-Smoothing in Graph Neural Networks via Dual-Dimensional Decoupling},
author={Wei Shen and Mang Ye and Wenke Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QqG3wI3c8L}
} | Graph Neural Networks (GNNs) are widely employed to derive meaningful node representations from graphs. Despite their success, deep GNNs frequently grapple with the oversmoothing issue, where node representations become highly indistinguishable due to repeated aggregations. In this work, we consider the oversmoothing issue from two aspects of the node embedding space: dimension and instance. Specifically, while existing methods primarily concentrate on instance-level node relations to mitigate oversmoothing, we propose to mitigate oversmoothing at dimension level. We reveal the heightened information redundancy between dimensions which diminishes information diversity and impairs node differentiation in GNNs. Motivated by this insight, we propose Dimension-Level Decoupling (DLD) to reduce dimension redundancy, enhancing dimensional-level node differentiation. Besides, at the instance level, the neglect of class differences leads to vague classification boundaries. Hence, we introduce Instance-Level Class-Difference Decoupling (ICDD) that repels inter-class nodes and attracts intra-class nodes, improving the instance-level node discrimination with clear classification boundaries. Additionally, we introduce a novel evaluation metric that considers the impact of class differences on node distances, facilitating precise oversmoothing measurement. Extensive experiments demonstrate the effectiveness of our method Dual-Dimensional Class-Difference Decoupling (DDCD) across diverse scenarios. | Resisting Over-Smoothing in Graph Neural Networks via Dual-Dimensional Decoupling | [
"Wei Shen",
"Mang Ye",
"Wenke Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Qp0d7CfKq8 | @inproceedings{
huang2024eventguided,
title={Event-Guided Rolling Shutter Correction with Time-Aware Cross-Attentions},
author={Hefei Huang and Xu Jia and Xinyu Zhang and Shengming Li and Huchuan Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Qp0d7CfKq8}
} | Many consumer cameras with rolling shutter (RS) CMOS sensors suffer from undesired distortion and artifacts, particularly when objects undergo fast motion. The neuromorphic event camera, with its high-temporal-resolution events, can bring substantial benefit to the RS correction process. In this work, we explore the characteristics of RS images and event data for the design of the rolling shutter correction (RSC) model. Specifically, the relationship between RS images and event data is modeled by incorporating time encoding into the computation of cross-attention in the transformer encoder to achieve time-aware multi-modal information fusion.
Features from RS images enhanced by event data are adopted as keys and values in transformer decoder, providing source for appearance, while features from event data enhanced by RS images are adopted as queries, providing spatial transition information.
By embedding the time information of the desired GS image into the query, the transformer with deformable attention is capable of producing the target GS image.
To enhance the model's generalization ability, we propose to further self-supervise the model by cycling between time coordinate systems corresponding to RS images and GS images.
Extensive evaluations over both synthetic and real datasets demonstrate that the proposed method performs favorably against state-of-the-art approaches. | Event-Guided Rolling Shutter Correction with Time-Aware Cross-Attentions | [
"Hefei Huang",
"Xu Jia",
"Xinyu Zhang",
"Shengming Li",
"Huchuan Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QoBHdPW1pM | @inproceedings{
wang2024contrastive,
title={Contrastive Graph Distribution Alignment for Partially View-Aligned Clustering},
author={Xibiao Wang and Hang Gao and Xindian Wei and Liang Peng and Rui Li and Cheng Liu and Si Wu and Hau-San Wong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QoBHdPW1pM}
} | Partially View-aligned Clustering (PVC) presents a challenge as it requires a comprehensive exploration of complementary and consistent information in the presence of partial alignment of view data. Existing PVC methods typically learn view correspondence based on latent features that are expected to contain common semantic information. However, latent features obtained from heterogeneous spaces, along with the enforcement of alignment into the same feature dimension, can introduce cross-view discrepancies. In particular, partially view-aligned data lacks sufficient shared correspondences for the critical common semantic feature learning, resulting in inaccuracies in establishing meaningful correspondences between latent features across different views. While feature representations may differ across views, instance relationships within each view could potentially encode consistent common semantics across views. Motivated by this, our aim is to learn view correspondence based on graph distribution metrics that capture semantic view-invariant instance relationships. To achieve this, we utilize similarity graphs to depict instance relationships and learn view correspondence by aligning semantic similarity graphs through optimal transport with graph distribution. This facilitates the precise learning of view alignments, even in the presence of heterogeneous view-specific feature distortions. Furthermore, leveraging well-established cross-view correspondence, we introduce a cross-view contrastive learning to learn semantic features by exploiting consistency information. The resulting meaningful semantic features effectively isolate shared latent patterns, avoiding the inclusion of irrelevant private information. We conduct extensive experiments on several real datasets, demonstrating the effectiveness of our proposed method for the PVC task. | Contrastive Graph Distribution Alignment for Partially View-Aligned Clustering | [
"Xibiao Wang",
"Hang Gao",
"Xindian Wei",
"Liang Peng",
"Rui Li",
"Cheng Liu",
"Si Wu",
"Hau-San Wong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Qk9o1DJsMv | @inproceedings{
wang2024megasurf,
title={MegaSurf: Scalable Large Scene Neural Surface Reconstruction},
author={Yusen Wang and Kaixuan Zhou and Wenxiao Zhang and Chunxia Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Qk9o1DJsMv}
} | We present MegaSurf, a Neural Surface Reconstruction (NSR) framework designed to reconstruct 3D models of large scenes from aerial images. Many methods utilize geometry cues to overcome the shape-radiance ambiguity, which would produce large geometric errors. However, directly using such inevitably imprecise geometric cues leads to degradation in the reconstruction results, especially on large-scale scenes. To address this issue, we propose a Learnable Geometric Guider (LG Guider) to learn a sampling field from reliable geometric cues. The LG Guider decides which positions should fit the input radiance and can be continuously refined by the rendering loss. MegaSurf uses a Divide-and-Conquer training strategy to address the synchronization issue between the Guider and the lagging NSR radiance field. This strategy enables the Guider to transmit the information it carries to the radiance field without being disrupted by the gradients back-propagated from the lagging rendering loss at the early stage of training. Furthermore, we propose a Fast PatchMatch MVS module to derive geometric cues in planar regions that help overcome the ambiguity.
Experiments on several aerial datasets show that MegaSurf can overcome ambiguity while preserving high-fidelity details. Compared to SOTA methods, MegaSurf achieves superior reconstruction accuracy of large scenes and boosts the acquisition of geometric cues more than four times. | MegaSurf: Scalable Large Scene Neural Surface Reconstruction | [
"Yusen Wang",
"Kaixuan Zhou",
"Wenxiao Zhang",
"Chunxia Xiao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QhzWn7Jz8u | @inproceedings{
xu2024prag,
title={P-{RAG}: Progressive Retrieval Augmented Generation For Planning on Embodied Everyday Task},
author={Weiye Xu and Min Wang and Wengang Zhou and Houqiang Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QhzWn7Jz8u}
} | Embodied Everyday Task is a popular task in the embodied AI community, requiring agents to make a sequence of actions based on natural language instructions and visual observations. Traditional learning-based approaches face two challenges. Firstly, natural language instructions often lack explicit task planning. Secondly, extensive training is required to equip models with knowledge of the task environment. Previous works based on Large Language Model (LLM) either suffer from poor performance due to the lack of task-specific knowledge or rely on ground truth as few-shot samples. To address the above limitations, we propose a novel approach called Progressive Retrieval Augmented Generation (P-RAG), which not only effectively leverages the powerful language processing capabilities of LLMs but also progressively accumulates task-specific knowledge without ground-truth. Compared to the conventional RAG methods, which retrieve relevant information from the database in a one-shot manner to assist generation, P-RAG introduces an iterative approach to progressively update the database. In each iteration, P-RAG retrieves the latest database and obtains historical information from the previous interaction as experiential references for the current interaction. Moreover, we also introduce a more granular retrieval scheme that not only retrieves similar tasks but also incorporates retrieval of similar situations to provide more valuable reference experiences. Extensive experiments reveal that P-RAG achieves competitive results without utilizing ground truth and can even further improve performance through self-iterations. We will release the source code to the public. | P-RAG: Progressive Retrieval Augmented Generation For Planning on Embodied Everyday Task | [
"Weiye Xu",
"Min Wang",
"Wengang Zhou",
"Houqiang Li"
] | Conference | poster | 2409.11279 | [
""
] | https://huggingface.co/papers/2409.11279 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=QdqpeWkOBK | @inproceedings{
xuan2024when,
title={When ControlNet Meets Inexplicit Masks: A Case Study of ControlNet on its Contour-following Ability},
author={Wenjie Xuan and Yufei Xu and Shanshan Zhao and Chaoyue Wang and Juhua Liu and Bo Du and Dacheng Tao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QdqpeWkOBK}
} | ControlNet excels at creating content that closely matches precise contours in user-provided masks. However, when these masks contain noise, as a frequent occurrence with non-expert users, the output would include unwanted artifacts. This paper first highlights the crucial role of controlling the impact of these inexplicit masks with diverse deterioration levels through in-depth analysis. Subsequently, to enhance controllability with inexplicit masks, an advanced Shape-aware ControlNet consisting of a deterioration estimator and a shape-prior modulation block is devised. The deterioration estimator assesses the deterioration factor of the provided masks. Then this factor is utilized in the modulation block to adaptively modulate the model's contour-following ability, which helps it dismiss the noise part in the inexplicit masks. Extensive experiments prove its effectiveness in encouraging ControlNet to interpret inaccurate spatial conditions robustly rather than blindly following the given contours, suitable for diverse kinds of conditions. We showcase application scenarios like modifying shape priors and composable shape-controllable generation. Codes are soon available. | When ControlNet Meets Inexplicit Masks: A Case Study of ControlNet on its Contour-following Ability | [
"Wenjie Xuan",
"Yufei Xu",
"Shanshan Zhao",
"Chaoyue Wang",
"Juhua Liu",
"Bo Du",
"Dacheng Tao"
] | Conference | poster | 2403.00467 | [
"https://github.com/DREAMXFAR/Shape-aware-ControlNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QayT1wjqYB | @inproceedings{
qiu2024manipulable,
title={Manipulable Ne{RF} using Recursively Subdivided Tetrahedra},
author={Zherui Qiu and Chenqu Ren and Kaiwen Song and Xiaoyi Zeng and Leyuan Yang and Juyong Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QayT1wjqYB}
} | While neural radiance fields (NeRF) have shown promise in novel view synthesis, their implicit representation limits explicit control over object manipulation. Existing research has proposed the integration of explicit geometric proxies to enable deformation. However, these methods face two primary challenges: firstly, the time-consuming and computationally demanding tetrahedralization process; and secondly, handling complex or thin structures often leads to either excessive, storage-intensive tetrahedral meshes or poor-quality ones that impair deformation capabilities. To address these challenges, we propose DeformRF, a method that seamlessly integrates the manipulability of tetrahedral meshes with the high-quality rendering capabilities of feature grid representations. To avoid ill-shaped tetrahedra and tetrahedralization for each object, we propose a two-stage training strategy. Starting with an almost-regular tetrahedral grid, our model initially retains key tetrahedra surrounding the object and subsequently refines object details using finer-granularity mesh in the second stage. We also present the concept of recursively subdivided tetrahedra to create higher-resolution meshes implicitly. This enables multi-resolution encoding while only necessitating the storage of the coarse tetrahedral mesh generated in the first training stage. We conduct a comprehensive evaluation of our DeformRF on both synthetic and real-captured datasets. Both quantitative and qualitative results demonstrate the effectiveness of our method for novel view synthesis and deformation tasks. Project page: https://ustc3dv.github.io/DeformRF/ | Deformable NeRF using Recursively Subdivided Tetrahedra | [
"Zherui Qiu",
"Chenqu Ren",
"Kaiwen Song",
"Xiaoyi Zeng",
"Leyuan Yang",
"Juyong Zhang"
] | Conference | poster | 2410.04402 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QZkHgKqIuG | @inproceedings{
cai2024prism,
title={{PRISM}: {PR}ogressive dependency maxImization for Scale-invariant image Matching},
author={Xudong Cai and Yongcai Wang and Lun Luo and Minhang Wang and Deying Li and Jintao Xu and Weihao Gu and Rui Ai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QZkHgKqIuG}
} | Image matching aims at identifying corresponding points between a pair of images. Currently, detector-free methods have shown impressive performance in challenging scenarios, thanks to their capability of generating dense matches and global receptive field. However, performing feature interaction and proposing matches across the entire image is unnecessary, as not all image regions contribute beneficially to the matching process. Interacting and matching in unmatchable areas can introduce errors, reducing matching accuracy and efficiency. Furthermore, the scale discrepancy issue still troubles existing methods. To address above issues, we propose PRogressive dependency maxImization for Scale-invariant image Matching (PRISM), which jointly prunes irrelevant patch features and tackles the scale discrepancy. To do this, we first present a Multi-scale Pruning Module (MPM) to adaptively prune irrelevant features by maximizing the dependency between the two feature sets. Moreover, we design the Scale-Aware Dynamic Pruning Attention (SADPA) to aggregate information from different scales via a hierarchical design. Our method's superior matching performance and generalization capability are confirmed by leading accuracy across various evaluation benchmarks and downstream tasks. The code will be publicly available. | PRISM: PRogressive dependency maxImization for Scale-invariant image Matching | [
"Xudong Cai",
"Yongcai Wang",
"Lun Luo",
"Minhang Wang",
"Deying Li",
"Jintao Xu",
"Weihao Gu",
"Rui Ai"
] | Conference | poster | 2408.03598 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QZCGSrjpJc | @inproceedings{
chen2024egocentric,
title={Egocentric Vehicle Dense Video Captioning},
author={Feiyu Chen and Cong Xu and Qi Jia and Yihua Wang and Yuhan Liu and Zhang Haotian and Endong Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QZCGSrjpJc}
} | Typical dense video captioning mostly concentrates on third-person videos, which are generally characterized by relatively delineated steps among events as seen in edited instructional videos. However, such videos do not genuinely reflect the way we perceive our real lives. Instead, we observe the world from an egocentric viewpoint and witness only continuous unedited footage. To facilitate further research, we introduce a new task, Egocentric Vehicle Dense Video Captioning, in the classic first-person driving scenario. This is a multi-modal, multi-task project for a comprehensive understanding of untrimmed, egocentric driving videos. It consists of three sub-tasks that focus on event localization, event captioning, and vehicle state estimation, respectively. Accomplishing these tasks requires dealing with at least three challenges, namely extracting relevant ego-motion information, describing driving behavior and understanding the underlying rationale, and resolving the boundary ambiguity problem. In response, we devise corresponding solutions, encompassing a vehicle ego-motion learning strategy and a novel adjacent contrastive learning strategy, which effectively address the aforementioned issues to a certain extent. We validate our method by conducting extensive experiments on the BDD-X dataset, all of which show promising results and achieve new state-of-the-art performance on most metrics, which proves the effectiveness of our approach.
"Feiyu Chen",
"Cong Xu",
"Qi Jia",
"Yihua Wang",
"Yuhan Liu",
"Zhang Haotian",
"Endong Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QPbeKwJNMb | @inproceedings{
du2024reversed,
title={Reversed in Time: A Novel Temporal-Emphasized Benchmark for Cross-Modal Video-Text Retrieval},
author={Yang Du and Yuqi Liu and Qin Jin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QPbeKwJNMb}
} | Video-text retrieval is an important task in the multimodal understanding field. Temporal understanding makes video-text retrieval more challenging than image-text retrieval. However, we find that the widely used video-text benchmarks have shortcomings in assessing models' retrieval ability, especially in temporal understanding, such that large-scale image-text pre-trained models can already achieve zero-shot performance comparable to video-text pre-trained models. In this paper, we introduce RTime, a novel temporal-emphasized video-text retrieval dataset, constructed through a top-down three-step process. We first obtain videos of actions or events with significant temporality, and then reverse these videos to create harder negative samples. We recruit annotators to judge the significance and reversibility of candidate videos, and then write captions for qualified videos. We further adopt GPT-4 to extend more captions based on the human-written captions. Our RTime dataset currently consists of 21k videos with 10 captions per video, totalling about 122 hours. Based on RTime, we propose three retrieval benchmark tasks: RTime-Origin, RTime-Hard, and RTime-Binary. We further enforce leveraging harder negatives in model training, and benchmark a variety of video-text models on RTime. Extensive experimental analysis proves that RTime indeed poses new and higher challenges to video-text retrieval. We will release our RTime benchmarks to further advance video-text retrieval and multimodal understanding research.
"Yang Du",
"Yuqi Liu",
"Qin Jin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QOmkk07hIH | @inproceedings{
luo2024shapley,
title={Shapley Value-based Contrastive Alignment for Multimodal Information Extraction},
author={Wen Luo and Yu Xia and Shen Tianshu and Sujian Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QOmkk07hIH}
} | The rise of social media and the exponential growth of multimodal communication necessitate advanced techniques for Multimodal Information Extraction (MIE). However, existing methodologies primarily rely on direct Image-Text interactions, a paradigm that often faces significant challenges due to semantic and modality gaps between images and text. In this paper, we introduce a new paradigm of Image-Context-Text interaction, where large multimodal models (LMMs) are utilized to generate descriptive textual context to bridge these gaps. In line with this paradigm, we propose a novel Shapley Value-based Contrastive Alignment (Shap-CA) method, which aligns both context-text and context-image pairs. Shap-CA initially applies the Shapley value concept from cooperative game theory to assess the individual contribution of each element in the set of contexts, texts and images towards total semantic and modality overlaps. Following this quantitative evaluation, a contrastive learning strategy is employed to enhance the interactive contribution within context-text/image pairs, while minimizing the influence across these pairs. Furthermore, we design an adaptive fusion module for selective cross-modal fusion. Extensive experiments across four MIE datasets demonstrate that our method significantly outperforms existing state-of-the-art methods. Code will be released upon acceptance.
"Wen Luo",
"Yu Xia",
"Shen Tianshu",
"Sujian Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QMErLBrKVy | @inproceedings{
fang2024sentimentoriented,
title={Sentiment-Oriented Sarcasm Integration: Effective Enhancement of Video Sentiment Analysis with Sarcasm Assistance},
author={Junlin Fang and Wenya Wang and Guosheng Lin and Fengmao Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QMErLBrKVy}
} | Sarcasm is an intricate expression phenomenon and has garnered increasing attention in recent years, especially in multimodal contexts such as videos. Nevertheless, despite being a significant aspect of human sentiment, the effect of sarcasm is consistently overlooked in sentiment analysis. Videos with sarcasm often convey sentiments that diverge from or even contradict their explicit messages. Prior works mainly concentrate on simply modeling sarcasm and sentiment features by utilizing the Multi-Task Learning (MTL) framework, which we found introduces detrimental interplays between the sarcasm detection and sentiment analysis tasks. Therefore, this study explores the effective enhancement of video sentiment analysis through the incorporation of sarcasm information. To this end, we propose the Progressively Sentiment-oriented Sarcasm Refinement and Integration (PS2RI) framework, which focuses on modeling sentiment-oriented sarcasm features to enhance sentiment prediction. Instead of naively combining sarcasm detection and sentiment prediction under an MTL framework, PS2RI iteratively performs the sentiment-oriented sarcasm refinement and sarcasm integration operations within the sentiment recognition framework, in order to progressively learn sarcasm-aware sentiment features without suffering the detrimental interplays caused by information irrelevant to the sentiment analysis task.
Extensive experiments are conducted to validate both the effectiveness and scalability of our approach. | Sentiment-Oriented Sarcasm Integration: Effective Enhancement of Video Sentiment Analysis with Sarcasm Assistance | [
"Junlin Fang",
"Wenya Wang",
"Guosheng Lin",
"Fengmao Lv"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QJ7LbzviHm | @inproceedings{
yu2024overcoming,
title={Overcoming Spatial-Temporal Catastrophic Forgetting for Federated Class-Incremental Learning},
author={Hao Yu and Xin Yang and Xin Gao and Yihui Feng and Hao Wang and Yan Kang and Tianrui Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QJ7LbzviHm}
} | This paper delves into federated class-incremental learning (FCiL), where new classes appear continually or even privately to local clients.
However, existing FCiL methods suffer from the problem of spatial-temporal catastrophic forgetting, i.e., forgetting the previously learned knowledge over time and the client-specific information owned by different clients.
Additionally, private class and knowledge heterogeneity amongst local clients further exacerbate spatial-temporal forgetting, making FCiL challenging to apply.
To address these issues, we propose Federated Class-specific Binary Classifier (FedCBC), an innovative approach to transferring and fusing knowledge across both temporal and spatial perspectives.
FedCBC consists of two novel components: (1) continual personalization that distills previous knowledge from a global model to multiple local models, and (2) selective knowledge fusion that enhances knowledge integration of the same class from divergent clients and shares private knowledge with other clients.
Extensive experiments using three newly-formulated metrics (termed GA, KRS, and KRT) demonstrate the effectiveness of the proposed approach. | Overcoming Spatial-Temporal Catastrophic Forgetting for Federated Class-Incremental Learning | [
"Hao Yu",
"Xin Yang",
"Xin Gao",
"Yihui Feng",
"Hao Wang",
"Yan Kang",
"Tianrui Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QHRNR64J1m | @inproceedings{
zhang2024from,
title={From Speaker to Dubber: Movie Dubbing with Prosody and Duration Consistency Learning},
author={Zhedong Zhang and Liang Li and Gaoxiang Cong and Haibing YIN and Yuhan Gao and Chenggang Yan and Anton van den Hengel and Yuankai Qi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QHRNR64J1m}
} | Movie Dubbing aims to convert scripts into speeches that align with the given movie clip in both temporal and emotional aspects while preserving the vocal timbre of one brief reference audio.
The wide variations in emotion, pace, and environment that dubbed speech must exhibit to achieve real alignment make dubbing a complex task.
Considering the limited scale of the movie dubbing datasets (due to copyright) and the interference from background noise, directly learning from movie dubbing datasets limits the pronunciation quality of learned models.
To address this problem, we propose a two-stage dubbing method that allows the model to first learn pronunciation knowledge before practicing it in movie dubbing.
In the first stage, we introduce a multi-task approach to pre-train a phoneme encoder on a large-scale text-speech corpus for learning clear and natural phoneme pronunciations.
For the second stage, we devise a prosody consistency learning module to bridge the emotional expression with the phoneme-level dubbing prosody attributes (pitch and energy).
Finally, we design a duration consistency reasoning module to align the dubbing duration with the lip movement.
Extensive experiments demonstrate that our method outperforms several state-of-the-art methods on two primary benchmarks.
The source code and model checkpoints will be released to the public.
The demos are available at https://speaker2dubber.github.io/. | From Speaker to Dubber: Movie Dubbing with Prosody and Duration Consistency Learning | [
"Zhedong Zhang",
"Liang Li",
"Gaoxiang Cong",
"Haibing YIN",
"Yuhan Gao",
"Chenggang Yan",
"Anton van den Hengel",
"Yuankai Qi"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QEJxw4hiFn | @inproceedings{
li2024generative,
title={Generative Multimodal Data Augmentation for Low-Resource Multimodal Named Entity Recognition},
author={Ziyan Li and Jianfei Yu and Jia Yang and Wenya Wang and Li Yang and Rui Xia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QEJxw4hiFn}
} | As an important task in multimodal information extraction, Multimodal Named Entity Recognition (MNER) has recently attracted considerable attention.
One key challenge of MNER lies in the lack of sufficient fine-grained annotated data, especially in low-resource scenarios.
Although data augmentation is a widely used technique to tackle the above issue, it is challenging to simultaneously generate synthetic text-image pairs and their corresponding high-quality entity annotations.
In this work, we propose a novel Generative Multimodal Data Augmentation (GMDA) framework for MNER, which contains two stages: Multimodal Text Generation and Multimodal Image Generation.
Specifically, we first transform each annotated sentence into a linearized labeled sequence, and then train a Label-aware Multimodal Large Language Model (LMLLM) to generate the labeled sequence based on a label-aware prompt and its associated image.
After using the trained LMLLM to generate synthetic labeled sentences, we further employ a Stable Diffusion model to generate the synthetic images that are semantically related to these sentences.
Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed GMDA framework, which consistently boosts the performance of several competitive methods for two subtasks of MNER in both full-supervision and low-resource settings. | Generative Multimodal Data Augmentation for Low-Resource Multimodal Named Entity Recognition | [
"Ziyan Li",
"Jianfei Yu",
"Jia Yang",
"Wenya Wang",
"Li Yang",
"Rui Xia"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=QAQPBYMglm | @inproceedings{
chen2024sato,
title={{SATO}: Stable Text-to-Motion Framework},
author={Wenshuo Chen and Hongru Xiao and Erhang Zhang and Lijie Hu and Lei Wang and Mengyuan Liu and Chen Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=QAQPBYMglm}
} | Is the Text-to-Motion model robust? Recent advancements in Text-to-Motion models primarily stem from more accurate predictions of specific actions. However, the text modality typically relies solely on pre-trained Contrastive Language-Image Pretraining (CLIP) models. Our research has uncovered a significant issue with the text-to-motion model: it often produces inconsistent outputs, resulting in vastly different or even incorrect poses when presented with semantically similar or identical text inputs. In this paper, we undertake an analysis to elucidate the underlying causes of this instability, establishing a clear link between the unpredictability of model outputs and the erratic attention patterns of the text encoder module. Consequently, we introduce a formal framework aimed at addressing this issue, which we term the Stable Text-to-Motion Framework (SATO). SATO consists of three modules, dedicated to stable attention, stable prediction, and maintaining a balance in the accuracy-robustness trade-off, respectively. We present a methodology for constructing an SATO that satisfies the stability of attention and prediction. To verify the stability of the model, we introduce a new textual synonym perturbation dataset based on HumanML3D and KIT-ML. Results show that SATO is significantly more stable against synonym and other slight perturbations while keeping its high accuracy.
"Wenshuo Chen",
"Hongru Xiao",
"Erhang Zhang",
"Lijie Hu",
"Lei Wang",
"Mengyuan Liu",
"Chen Chen"
] | Conference | poster | 2405.01461 | [
"https://github.com/sato-team/stable-text-to-motion-framework"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Q9gE1RbSHO | @inproceedings{
wang2024weakly,
title={Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering},
author={Haibo Wang and Chenghang Lai and Yixuan Sun and Weifeng Ge},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Q9gE1RbSHO}
} | Video Question Answering (VideoQA) aims to answer natural language questions based on the information observed in videos. Despite the recent success of Large Multimodal Models (LMMs) in image-language understanding and reasoning, they deal with VideoQA insufficiently, by simply taking uniformly sampled frames as visual inputs, which ignores question-relevant visual clues. Moreover, there are no human annotations for question-critical timestamps in existing VideoQA datasets. In light of this, we propose a novel weakly supervised framework to enforce the LMMs to reason out the answers with question-critical moments as visual inputs. Specifically, we first fuse the question and answer pairs as event descriptions to find multiple keyframes as target moments and pseudo-labels, with the visual-language alignment capability of the CLIP models. With these pseudo-labeled keyframes as additionally weak supervision, we devise a lightweight Gaussian-based Contrastive Grounding (GCG) module. GCG learns multiple Gaussian functions to characterize the temporal structure of the video, and sample question-critical frames as positive moments to be the visual inputs of LMMs. Extensive experiments on several benchmarks verify the effectiveness of our framework, and we achieve substantial improvements compared to previous state-of-the-art methods. | Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering | [
"Haibo Wang",
"Chenghang Lai",
"Yixuan Sun",
"Weifeng Ge"
] | Conference | poster | 2401.10711 | [
"https://github.com/whb139426/gcg"
] | https://huggingface.co/papers/2401.10711 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=Q5C9eNZl1m | @inproceedings{
ye2024flashspeech,
title={FlashSpeech: Efficient Zero-Shot Speech Synthesis},
author={Zhen Ye and Zeqian Ju and Haohe Liu and Xu Tan and Jianyi Chen and Yiwen Lu and Peiwen Sun and Jiahao Pan and Bianweizhen and Shulin He and Wei Xue and Qifeng Liu and Yike Guo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Q5C9eNZl1m}
} | Recent progress in large-scale zero-shot speech synthesis has been significantly advanced by language models and diffusion models. However, the generation process of both methods is slow and computationally intensive. Efficient speech synthesis using a lower computing budget to achieve quality on par with previous work remains a significant challenge. In this paper, we present FlashSpeech, a large-scale zero-shot speech synthesis system with approximately 5\% of the inference time compared with previous work. FlashSpeech is built on the latent consistency model and applies a novel adversarial consistency training approach that can train from scratch without the need for a pre-trained diffusion model as the teacher. Furthermore, a new prosody generator module enhances the diversity of prosody, making the rhythm of the speech sound more natural.
The generation processes of FlashSpeech can be achieved efficiently with one or two sampling steps while maintaining high audio quality and high similarity to the audio prompt for zero-shot speech generation.
Our experimental results demonstrate the superior performance of FlashSpeech. Notably, FlashSpeech can be about 20 times faster than other zero-shot speech synthesis systems while maintaining comparable performance in terms of voice quality and similarity. Furthermore, FlashSpeech demonstrates its versatility by efficiently performing tasks like voice conversion, speech editing, and diverse speech sampling. Audio samples can be found in https://flashspeech.github.io | FlashSpeech: Efficient Zero-Shot Speech Synthesis | [
"Zhen Ye",
"Zeqian Ju",
"Haohe Liu",
"Xu Tan",
"Jianyi Chen",
"Yiwen Lu",
"Peiwen Sun",
"Jiahao Pan",
"Bianweizhen",
"Shulin He",
"Wei Xue",
"Qifeng Liu",
"Yike Guo"
] | Conference | poster | 2404.14700 | [
"https://github.com/zhenye234/CoMoSpeech"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Q0IOmAwDqo | @inproceedings{
huang2024adaptive,
title={Adaptive Instance-wise Multi-view Clustering},
author={Shudong Huang and Hecheng Cai and Hao Dai and Wentao Feng and Jiancheng Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Q0IOmAwDqo}
} | Multi-view clustering has garnered attention for its effectiveness in addressing heterogeneous data by revealing underlying correlations between different views in an unsupervised manner.
As a mainstream method, multi-view graph clustering has attracted increasing attention in recent years. Despite its success, it still has some limitations. Notably, many methods construct the similarity graph without considering the local geometric structure and exploit
coarse-grained complementary and consensus information from different views at the view level.
To solve the shortcomings, we focus on local structure consistency and fine-grained representations across multiple views. Specifically, each view's local consistency similarity graph is obtained through the adaptive neighbor. Subsequently, the multi-view similarity tensor is rotated and sliced into fine-grained instance-wise slices. Finally, these slices are fused into the final similarity matrix. Consequently, cross-view consistency can be captured by exploring the intersections of multiple views in an instance-wise manner. We design a collaborative framework with the augmented Lagrangian method to refine all subtasks towards optimal solutions iteratively. Extensive experiments on several multi-view datasets confirm the significant enhancement in clustering accuracy achieved by our method. | Adaptive Instance-wise Multi-view Clustering | [
"Shudong Huang",
"Hecheng Cai",
"Hao Dai",
"Wentao Feng",
"Jiancheng Lv"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PsC0T9Ugak | @inproceedings{
xu2024wsel,
title={{WSEL}: {EEG} feature selection with weighted self-expression learning for incomplete multi-dimensional emotion recognition},
author={Xueyuan Xu and Li Zhuo and Jinxin Lu and Xia Wu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PsC0T9Ugak}
} | Due to the small size of valid samples, multi-source EEG features with high dimensionality can easily cause problems such as overfitting and poor real-time performance of the emotion recognition classifier. Feature selection has been demonstrated as an effective means to solve these problems. Current EEG feature selection research assumes that all dimensions of emotional labels are complete. However, owing to the open acquisition environment, subjective variability, and border ambiguity of individual perceptions of emotion, the training data in the practical application often includes missing information, i.e., multi-dimensional emotional labels of several instances are incomplete. The aforementioned incomplete information directly restricts the accurate construction of the EEG feature selection model for multi-dimensional emotion recognition. To wrestle with the aforementioned problem, we propose a novel EEG feature selection model with weighted self-expression learning (WSEL). The model utilizes self-representation learning and least squares regression to reconstruct the label space through the second-order correlation and higher-order correlation within the multi-dimensional emotional labels and simultaneously realize the EEG feature subset selection under the incomplete information. We have utilized two multimedia-induced emotion datasets with EEG recordings, DREAMER and DEAP, to confirm the effectiveness of WSEL in the partial multi-dimensional emotional feature selection challenge. Compared to nine state-of-the-art feature selection approaches, the experimental results demonstrate that the EEG feature subsets chosen by WSEL can achieve optimal performance in terms of six performance metrics. | WSEL: EEG feature selection with weighted self-expression learning for incomplete multi-dimensional emotion recognition | [
"Xueyuan Xu",
"Li Zhuo",
"Jinxin Lu",
"Xia Wu"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Pq63G43jkQ | @inproceedings{
wang2024observe,
title={Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations},
author={Fanfan Wang and Heqing Ma and Xiangqing Shen and Jianfei Yu and Rui Xia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Pq63G43jkQ}
} | Emotion cause analysis has attracted increasing attention in recent years.
Extensive research has been dedicated to multimodal emotion recognition in conversations.
However, the integration of multimodal information with emotion cause remains underexplored.
Existing studies merely extract utterances or spans from conversations as cause evidence, which may not be concise and clear enough, especially given the lack of explicit descriptions of other modalities, making it difficult to intuitively understand the causes.
To address these limitations, we introduce a new task named Multimodal Emotion Cause Generation in Conversations (MECGC), which aims to generate an abstractive summary describing the causes that trigger the given emotion based on the multimodal context of conversations.
We accordingly construct a dataset named ECGF that contains 1,374 conversations and 7,690 emotion instances from TV series.
We further develop a generative framework that first generates emotion-cause aware video captions (Observe) and then facilitates the generation of emotion causes (Generate).
The captioning model is trained with examples synthesized by a Multimodal Large Language Model (MLLM).
Experimental results demonstrate the effectiveness of our framework and the significance of multimodal information for emotion cause analysis. | Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations | [
"Fanfan Wang",
"Heqing Ma",
"Xiangqing Shen",
"Jianfei Yu",
"Rui Xia"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Pnaq8Sd4Ql | @inproceedings{
xu2024selfsupervised,
title={Self-Supervised Emotion Representation Disentanglement for Speech-Preserving Facial Expression Manipulation},
author={Zhihua Xu and Tianshui Chen and Zhijing Yang and Chunmei Qing and Yukai Shi and Liang Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Pnaq8Sd4Ql}
} | Speech-preserving Facial Expression Manipulation (SPFEM) aims to alter facial emotions in video content while preserving the facial movements associated with speech. Current works often fall short due to the inadequate representation of emotion as well as the absence of time-aligned paired data—two corresponding frames from the same speaker that showcase the same speech content but differ in emotional expression. In this work, we introduce a novel framework, Self-Supervised Emotion Representation Disentanglement (SSERD), to disentangle emotion representation for accurate emotion transfer while implementing a paired data construction module to facilitate automated, photorealistic facial animations. Specifically, We developed a module for learning emotion latent codes using StyleGAN's latent space, employing a cross-attention mechanism to extract and predict emotion editing codes, with contrastive learning to differentiate emotions. To overcome the lack of strictly paired data in the SPFEM task, we exploit pretrained StyleGAN to generate paired data, focusing on expression vectors unrelated to mouth shape. Additionally, we employed a hybrid training strategy using both synthetic paired and real unpaired data to enhance the realism of SPFEM model's generated images. Extensive experiments conducted on benchmark datasets, including MEAD and RAVDESS, have validated the effectiveness of our framework, demonstrating its superior capability in generating photorealistic and expressive facial animations. | Self-Supervised Emotion Representation Disentanglement for Speech-Preserving Facial Expression Manipulation | [
"Zhihua Xu",
"Tianshui Chen",
"Zhijing Yang",
"Chunmei Qing",
"Yukai Shi",
"Liang Lin"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Pmtk0LWaXu | @inproceedings{
yuan2024vrdistill,
title={{VRD}istill: Vote Refinement Distillation for Efficient Indoor 3D Object Detection},
author={Ze Yuan and Jinyang Guo and Dakai An and Junran Wu and He Zhu and Jianhao Li and Xueyuan Chen and Ke Xu and Jiaheng Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Pmtk0LWaXu}
} | Recently, indoor 3D object detection has shown impressive progress. However, these improvements have come at the cost of increased memory consumption and longer inference times, making it difficult to apply these methods in practical scenarios. To address this issue, knowledge distillation has emerged as a promising technique for model acceleration. In this paper, we propose the VRDistill framework, the first knowledge distillation framework designed for efficient indoor 3D object detection. Our VRDistill framework includes a refinement module and a soft foreground mask operation to enhance the quality of the distillation. The refinement module utilizes trainable layers to improve the quality of the teacher's votes, while the soft foreground mask operation focuses on foreground votes, further enhancing the distillation performance. Comprehensive experiments on the ScanNet and SUN-RGBD datasets demonstrate the effectiveness and generalization ability of our VRDistill framework. | VRDistill: Vote Refinement Distillation for Efficient Indoor 3D Object Detection | [
"Ze Yuan",
"Jinyang Guo",
"Dakai An",
"Junran Wu",
"He Zhu",
"Jianhao Li",
"Xueyuan Chen",
"Ke Xu",
"Jiaheng Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
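To make the distillation recipe sketched in the VRDistill abstract above more concrete, the following is a minimal, hypothetical PyTorch sketch of a vote-distillation loss with a trainable refinement head and a soft foreground mask. The module shapes, the smooth-L1 choice, and the idea that the refinement layers receive their own ground-truth supervision elsewhere are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a soft-foreground-masked vote distillation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoteDistillLoss(nn.Module):
    def __init__(self, vote_dim: int = 3):
        super().__init__()
        # Trainable layers that refine the teacher's votes; assumed to be trained
        # with a separate ground-truth term that is omitted here.
        self.refine = nn.Sequential(
            nn.Linear(vote_dim, 64), nn.ReLU(), nn.Linear(64, vote_dim)
        )

    def forward(self, teacher_votes, student_votes, foreground_prob):
        # teacher_votes, student_votes: (B, N, 3); foreground_prob: (B, N) in [0, 1]
        refined = teacher_votes + self.refine(teacher_votes)          # residual refinement
        per_vote = F.smooth_l1_loss(student_votes, refined.detach(),
                                    reduction="none").sum(dim=-1)
        # The soft mask focuses distillation on likely-foreground votes.
        return (foreground_prob * per_vote).sum() / foreground_prob.sum().clamp(min=1e-6)

# Toy usage
loss_fn = VoteDistillLoss()
loss = loss_fn(torch.randn(2, 256, 3), torch.randn(2, 256, 3), torch.rand(2, 256))
print(loss.item())
```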
null | https://openreview.net/forum?id=PkW68qL47n | @inproceedings{
guo2024openvocabulary,
title={Open-Vocabulary Audio-Visual Semantic Segmentation},
author={Ruohao Guo and Liao Qu and Dantong Niu and Yanyu Qi and Wenzhen Yue and Ji Shi and Bowei Xing and Xianghua Ying},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PkW68qL47n}
} | Audio-visual semantic segmentation (AVSS) aims to segment and classify sounding objects in videos with acoustic cues. However, most approaches operate on the close-set assumption and only identify pre-defined categories from training data, lacking the generalization ability to detect novel categories in practical applications. In this paper, we introduce a new task: \textbf{open-vocabulary audio-visual semantic segmentation}, extending AVSS task to open-world scenarios beyond the annotated label space. This is a more challenging task that requires recognizing all categories, even those that have never been seen nor heard during training. Moreover, we propose the first open-vocabulary AVSS framework, OV-AVSS, which mainly consists of two parts: 1) a universal sound source localization module to perform audio-visual fusion and locate all potential sounding objects and 2) an open-vocabulary classification module to predict categories with the help of the prior knowledge from large-scale pre-trained vision-language models. To properly evaluate the open-vocabulary AVSS, we split zero-shot training and testing subsets based on the AVSBench-semantic benchmark, namely AVSBench-OV. Extensive experiments demonstrate the strong segmentation and zero-shot generalization ability of our model on all categories. On the AVSBench-OV dataset, OV-AVSS achieves 55.43% mIoU on base categories and 29.14% mIoU on novel categories, exceeding the state-of-the-art zero-shot method by 41.88%/20.61% and open-vocabulary method by 10.2%/11.6%. | Open-Vocabulary Audio-Visual Semantic Segmentation | [
"Ruohao Guo",
"Liao Qu",
"Dantong Niu",
"Yanyu Qi",
"Wenzhen Yue",
"Ji Shi",
"Bowei Xing",
"Xianghua Ying"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
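The open-vocabulary classification step described in the OV-AVSS abstract above can be illustrated with a short, hedged sketch: embeddings of predicted sounding objects are scored against text embeddings of category names obtained from a pretrained vision-language model. The embedding dimension and temperature below are assumptions.

```python
# Illustrative open-vocabulary classification of sounding-object embeddings.
import torch
import torch.nn.functional as F

def open_vocab_classify(mask_embeds: torch.Tensor,
                        text_embeds: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """mask_embeds: (N, D), one embedding per predicted sounding object.
    text_embeds: (C, D), one embedding per (base or novel) category name.
    Returns (N, C) class probabilities."""
    m = F.normalize(mask_embeds, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    logits = m @ t.t() / temperature          # scaled cosine similarity
    return logits.softmax(dim=-1)

# Toy usage: 5 predicted objects, 20 category prompts, 512-d embedding space.
probs = open_vocab_classify(torch.randn(5, 512), torch.randn(20, 512))
print(probs.shape)  # torch.Size([5, 20])
```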
null | https://openreview.net/forum?id=Peu0lDzBw6 | @inproceedings{
kim2024learnable,
title={Learnable Negative Proposals Using Dual-Signed Cross-Entropy Loss for Weakly Supervised Video Moment Localization},
author={Sunoh Kim and Daeho Um and Hyunjun Choi and Jin young Choi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Peu0lDzBw6}
} | Most existing methods for weakly supervised video moment localization use rule-based negative proposals. However, the rule-based ones have a limitation in capturing various confusing locations throughout the entire video. To alleviate the limitation, we propose learning-based negative proposals which are trained using a dual-signed cross-entropy loss. The dual-signed cross-entropy loss is controlled by a weight that changes gradually from a minus value to a plus one. The minus value makes the negative proposals be trained to capture query-irrelevant temporal boundaries (easy negative) in the earlier training stages, whereas the plus one makes them capture somewhat query-relevant temporal boundaries (hard negative) in the later training stages. To evaluate the quality of negative proposals, we introduce a new evaluation metric to measure how well a negative proposal captures a poorly-generated positive proposal. We verify that our negative proposals can be applied with negligible additional parameters and inference costs, achieving state-of-the-art performance on three public datasets. | Learnable Negative Proposals Using Dual-Signed Cross-Entropy Loss for Weakly Supervised Video Moment Localization | [
"Sunoh Kim",
"Daeho Um",
"Hyunjun Choi",
"Jin young Choi"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
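As an illustration of the dual-signed idea in the abstract above, the sketch below linearly moves a loss weight from a negative to a positive value during training, so the negative-proposal branch is first pushed away from the query (easy negatives) and later pulled toward query-relevant regions (hard negatives). The linear schedule and the binary matching formulation are assumptions rather than the paper's exact loss.

```python
# Hypothetical dual-signed weighting schedule for learnable negative proposals.
import torch
import torch.nn.functional as F

def dual_signed_weight(step: int, total_steps: int,
                       w_min: float = -1.0, w_max: float = 1.0) -> float:
    """Linearly interpolate the loss weight from w_min (early) to w_max (late)."""
    ratio = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return w_min + (w_max - w_min) * ratio

def negative_proposal_loss(neg_match_logits: torch.Tensor,
                           step: int, total_steps: int) -> torch.Tensor:
    """neg_match_logits: (B,) logit that a learned negative proposal matches the query.
    With a negative weight, minimising the total loss pushes the proposal away from
    the query (easy negative); with a positive weight it pulls the proposal toward
    query-relevant, hard-negative regions."""
    target = torch.ones_like(neg_match_logits)
    ce = F.binary_cross_entropy_with_logits(neg_match_logits, target, reduction="mean")
    return dual_signed_weight(step, total_steps) * ce

# Toy usage
logits = torch.randn(8)
for s in (0, 500, 1000):
    print(s, negative_proposal_loss(logits, s, 1000).item())
```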
null | https://openreview.net/forum?id=PbiXiSzlj9 | @inproceedings{
qu2024goi,
title={{GOI}: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane},
author={Yansong Qu and Shaohui Dai and Xinyang Li and Jianghang Lin and Liujuan Cao and Shengchuan Zhang and Rongrong Ji},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PbiXiSzlj9}
} | 3D open-vocabulary scene understanding, crucial for advancing augmented reality and robotic applications, involves interpreting and locating specific regions within a 3D space as directed by natural language instructions.
To this end, we introduce GOI, a framework that integrates semantic features from 2D vision-language foundation models into 3D Gaussian Splatting (3DGS) and identifies 3D Gaussians of Interest using an Optimizable Semantic-space Hyperplane.
Our approach includes an efficient compression method that utilizes scene priors to condense noisy high-dimensional semantic features into compact low-dimensional vectors, which are subsequently embedded in 3DGS.
During the open-vocabulary querying process, we adopt a distinct approach compared to existing methods, which depend on a manually set fixed empirical threshold to select regions based on their semantic feature distance to the query text embedding. This traditional approach often lacks universal accuracy, leading to challenges in precisely identifying specific target areas. Instead, our method treats the feature selection process as a hyperplane division within the feature space, retaining only those features that are highly relevant to the query. We leverage off-the-shelf 2D Referring Expression Segmentation (RES) models to fine-tune the semantic-space hyperplane, enabling a more precise distinction between target regions and others. This fine-tuning substantially improves the accuracy of open-vocabulary queries, ensuring the precise localization of pertinent 3D Gaussians.
Extensive experiments demonstrate GOI's superiority over previous state-of-the-art methods. | GOI: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane | [
"Yansong Qu",
"Shaohui Dai",
"Xinyang Li",
"Jianghang Lin",
"Liujuan Cao",
"Shengchuan Zhang",
"Rongrong Ji"
] | Conference | poster | 2405.17596 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
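The "optimizable semantic-space hyperplane" in the GOI abstract above can be pictured as a learnable (w, b) pair that splits the per-Gaussian semantic features, replacing a fixed similarity threshold. The sketch below is a toy version built on assumptions: initialization from the query text embedding and supervision from a per-Gaussian reference label stand in for the paper's RES-guided fine-tuning.

```python
# Toy sketch of selecting 3D Gaussians with a learnable semantic-space hyperplane.
import torch
import torch.nn as nn

class SemanticHyperplane(nn.Module):
    def __init__(self, text_embed: torch.Tensor, bias_init: float = 0.0):
        super().__init__()
        self.w = nn.Parameter(text_embed.clone())       # hyperplane normal
        self.b = nn.Parameter(torch.tensor(bias_init))  # offset

    def forward(self, gaussian_feats: torch.Tensor) -> torch.Tensor:
        """gaussian_feats: (N, D) compact semantic features per 3D Gaussian.
        Returns (N,) signed distances; > 0 means 'of interest'."""
        return gaussian_feats @ self.w + self.b

# Fine-tune (w, b) against stand-in reference labels (assumed RES-derived supervision).
feats = torch.randn(1000, 32)
plane = SemanticHyperplane(text_embed=torch.randn(32))
ref_labels = (torch.rand(1000) > 0.7).float()
opt = torch.optim.Adam(plane.parameters(), lr=1e-2)
for _ in range(100):
    loss = nn.functional.binary_cross_entropy_with_logits(plane(feats), ref_labels)
    opt.zero_grad(); loss.backward(); opt.step()
print((plane(feats) > 0).sum().item(), "Gaussians selected")
```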
null | https://openreview.net/forum?id=PTYL6011vp | @inproceedings{
hu2024novachart,
title={NovaChart: A Large-scale Dataset towards Chart Understanding and Generation of Multimodal Large Language Models},
author={Linmei Hu and Duokang Wang and Yiming Pan and Jifan Yu and Yingxia Shao and Chong Feng and Liqiang Nie},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PTYL6011vp}
} | Multimodal Large Language Models (MLLMs) have shown significant potential for chart understanding and generation.
However, they are still far from achieving the desired effectiveness in practical applications. This could be due to the limitations of the used training chart data. Existing chart datasets suffer from scarcity of chart types, limited coverage of tasks, and insufficient scalability, making them incapable of effectively enhancing the chart-related capabilities of MLLMs. To tackle these obstacles, we construct NovaChart, a large-scale dataset for chart understanding and generation of MLLMs. NovaChart contains 47K high-resolution chart images and 856K chart-related instructions, covering 18 different chart types and 15 unique tasks of chart understanding and generation. To build NovaChart, we propose a data generation engine for metadata curation, chart visualization and instruction formulation. Chart metadata in NovaChart contains detailed annotations, i.e., data points, visual elements, source data and the visualization code of every chart. This additional information endows NovaChart with considerable scalability, as it can facilitate the extension of chart instruction data to a larger scale and greater diversity. We utilize NovaChart to train several open-source MLLMs. Experimental results demonstrate NovaChart empowers MLLMs with stronger capabilities in 15 chart understanding and generation tasks by a large-margin (35.47\%-619.47\%), bringing them a step closer to smart chart assistants. Our dataset is now available at https://github.com/Elucidator-V/NovaChart. | NovaChart: A Large-scale Dataset towards Chart Understanding and Generation of Multimodal Large Language Models | [
"Linmei Hu",
"Duokang Wang",
"Yiming Pan",
"Jifan Yu",
"Yingxia Shao",
"Chong Feng",
"Liqiang Nie"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PPmOOMhMOW | @inproceedings{
jiang2024prior,
title={Prior Knowledge Integration via {LLM} Encoding and Pseudo Event Regulation for Video Moment Retrieval},
author={Yiyang Jiang and Wengyu Zhang and Xulu Zhang and Xiaoyong Wei and Chang Wen Chen and Qing Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PPmOOMhMOW}
} | In this paper, we investigate the feasibility of leveraging large language models (LLMs) for integrating general knowledge and incorporating pseudo-events as priors for temporal content distribution in video moment retrieval (VMR) models. The motivation behind this study arises from the limitations of using LLMs as decoders for generating discrete textual descriptions, which hinders their direct application to continuous outputs like salience scores and inter-frame embeddings that capture inter-frame relations. To overcome these limitations, we propose utilizing LLM encoders instead of decoders. Through a feasibility study, we demonstrate that LLM encoders effectively refine inter-concept relations in multimodal embeddings, even without being trained on textual embeddings. We also show that the refinement capability of LLM encoders can be transferred to other embeddings, such as BLIP and T5, as long as these embeddings exhibit similar inter-concept similarity patterns to CLIP embeddings. We present a general framework for integrating LLM encoders into existing VMR architectures, specifically within the fusion module. The LLM encoder's ability to refine concept relations can help the model achieve a balanced understanding of the foreground concepts (e.g., persons, faces) and background concepts (e.g., street, mountains) rather than focusing only on the visually dominant foreground concepts. Additionally, we introduce the concept of pseudo-events, obtained through event detection techniques, to guide the prediction of moments within event boundaries instead of crossing them, which can effectively avoid the distractions from adjacent moments. The integration of semantic refinement using LLM encoders and pseudo-event regulation is designed as plug-in components that can be incorporated into existing VMR methods within the general framework. Through experimental validation, we demonstrate the effectiveness of our proposed methods by achieving state-of-the-art performance in VMR. The source code can be accessed at https://github.com/open_upon_acceptance. | Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval | [
"Yiyang Jiang",
"Wengyu Zhang",
"Xulu Zhang",
"Xiaoyong Wei",
"Chang Wen Chen",
"Qing Li"
] | Conference | oral | 2407.15051 | [
"https://github.com/fletcherjiang/llmepet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=PPJZV6XJL8 | @inproceedings{
mamta2024aspectbased,
title={Aspect-Based Multimodal Mining: Unveiling Sentiments, Complaints, and Beyond in User-Generated Content},
author={Mamta Mamta and gopendra Vikram singh and Deepak Raju Kori and Asif Ekbal},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PPJZV6XJL8}
} | Sentiment analysis and complaint identification are key tools in mining user preferences by measuring the polarity and breach of expectations. Recent works on complaint identification identify aspect categories and classify them into complaint or not-complaint classes. However, aspect category-based complaint identification provides high-level information about the features of products.
In addition, it is observed that the user sometimes does not complain about a specific aspect but expresses concern about specific aspects in a respectful way. Currently, uni-modal and multimodal studies do not distinguish the thin line between complaint and concern.
In this work, we propose the task of multimodal aspect term-based analysis beyond sentiments and complaints. It comprises two sub-tasks, \textit{viz.} (i) classification of the given aspect term into one of the four classes, \textit{viz.} praise, concern, complaint, and others, and (ii) identification of the cause of the praise, concern, and complaint classes. We propose the first benchmark explainable multimodal corpus annotated for aspect term-based complaints, praises, concerns, their corresponding causes, and sentiments. Further, we propose an effective technique for the joint learning of aspect term-based complaint/concern/praise identification and cause extraction tasks (primary tasks), where sentiment analysis is used as a secondary task to assist the primary tasks, and establish them as baselines for further research in this direction. A sample dataset has been made available at: \url{https://anonymous.4open.science/r/MAspectX-327E/README.md} The whole dataset will be made publicly available for research after acceptance of the paper. | Aspect-Based Multimodal Mining: Unveiling Sentiments, Complaints, and Beyond in User-Generated Content | [
"Mamta Mamta",
"gopendra Vikram singh",
"Deepak Raju Kori",
"Asif Ekbal"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PJpoCWx4qG | @inproceedings{
yao2024decoupling,
title={Decoupling Heterogeneous Features for Robust 3D Interacting Hand Poses Estimation},
author={Huan Yao and Changxing Ding and Xuanda Xu and Zhifeng Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PJpoCWx4qG}
} | Estimating the 3D poses of interacting hands from a monocular image is challenging due to the similarity in appearance between hand parts. Therefore, utilizing the appearance features alone tends to result in unreliable pose estimation. Existing approaches directly fuse the appearance features with position features, ignoring that the two types of features are heterogeneous. Here, the appearance features are derived from the RGB values of pixels, while the position features are mapped from the coordinates of pixels or joints. To address this problem, we present a novel framework called \textbf{D}ecoupled \textbf{F}eature \textbf{L}earning (\textbf{DFL}) for 3D pose estimation of interacting hands. By decoupling the appearance and position features, we facilitate the interactions within each feature type and those between both types of features. First, we compute the appearance relationships between the joint queries and the image feature maps; we utilize these relationships to aggregate each joint's appearance and position features. Second, we compute the 3D spatial relationships between hand joints using their position features; we utilize these relationships to guide the feature enhancement of joints. Third, we calculate appearance relationships and spatial relationships between the joints and image using the appearance and position features, respectively; we utilize these complementary relationships to promote the joints' location in the image. The two processes mentioned above are conducted iteratively. Finally, only the refined position features are used for hand pose estimation. This strategy avoids the step of mapping heterogeneous appearance features to hand-joint positions. Our method significantly outperforms state-of-the-art methods on the large-scale InterHand2.6M dataset. More impressively, our method exhibits strong generalization ability on in-the-wild images. The code will be released. | Decoupling Heterogeneous Features for Robust 3D Interacting Hand Poses Estimation | [
"Huan Yao",
"Changxing Ding",
"Xuanda Xu",
"Zhifeng Lin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PBPeqABctr | @inproceedings{
zhu2024enhancing,
title={Enhancing Model Interpretability with Local Attribution over Global Exploration},
author={Zhiyu Zhu and Zhibo Jin and Jiayu Zhang and Huaming Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PBPeqABctr}
} | In the field of artificial intelligence, AI models are frequently described as ‘black boxes’ due to the obscurity of their internal mechanisms. This has ignited research interest in model interpretability, especially in attribution methods that offer precise explanations of model decisions. Current attribution algorithms typically evaluate the importance of each parameter by exploring the sample space. A large number of intermediate states are introduced during the exploration process, which may reach the model’s Out-of-Distribution (OOD) space. Such intermediate states will impact the attribution results, making it challenging to grasp the relative importance of features. In this paper, we first define the local space and its relevant properties, and we propose the Local Attribution (LA) algorithm that leverages these properties. The LA algorithm comprises both targeted and untargeted exploration phases, which are designed to effectively generate intermediate states for attribution that thoroughly encompass the local space. Compared to the state-of-the-art attribution methods, our approach achieves an average improvement of 38.21% in attribution effectiveness. Extensive ablation studies within our experiments also validate the significance of each component in our algorithm. Our code is available at: https://anonymous.4open.science/r/LA-2024 | Enhancing Model Interpretability with Local Attribution over Global Exploration | [
"Zhiyu Zhu",
"Zhibo Jin",
"Jiayu Zhang",
"Huaming Chen"
] | Conference | poster | 2408.07736 | [
"https://github.com/lmbtough/la"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
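To give a flavour of attribution restricted to a local space, as motivated in the abstract above, the toy sketch below averages gradients over perturbations sampled inside a small epsilon-ball around the input instead of integrating toward a distant baseline. It only illustrates the local-exploration idea and is not the authors' LA algorithm; the perturbation scheme and gradient-times-input aggregation are assumptions.

```python
# Toy attribution from locally sampled intermediate states.
import torch
import torch.nn as nn

def local_attribution(model: nn.Module, x: torch.Tensor, target: int,
                      eps: float = 0.05, steps: int = 32) -> torch.Tensor:
    """x: (1, C, H, W). Returns a saliency map of the same shape."""
    grads = torch.zeros_like(x)
    for _ in range(steps):
        delta = (torch.rand_like(x) * 2 - 1) * eps       # stay inside the local space
        xi = (x + delta).clone().requires_grad_(True)
        score = model(xi)[0, target]
        g, = torch.autograd.grad(score, xi)
        grads += g
    return (grads / steps) * x                           # gradient-times-input aggregation

# Toy usage with a tiny stand-in classifier.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
attr = local_attribution(net, torch.rand(1, 3, 8, 8), target=3)
print(attr.shape)
```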
null | https://openreview.net/forum?id=PATBoYAHib | @inproceedings{
yan2024trackingforced,
title={Tracking-forced Referring Video Object Segmentation},
author={Ruxue Yan and wenya guo and XuBo Liu and Xumeng Liu and Ying Zhang and Xiaojie Yuan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PATBoYAHib}
} | Referring video object segmentation (RVOS) is a cross-modal task that aims to segment the target object described by language expressions. A video typically consists of multiple frames and existing works conduct segmentation at either the clip-level or the frame-level. Clip-level methods process a clip at once and segment in parallel, lacking explicit inter-frame interactions. In contrast, frame-level methods facilitate direct interactions between frames by processing videos frame by frame, but they are prone to error accumulation. In this paper, we propose a novel tracking-forced framework, introducing high-quality tracking information and forcing the model to achieve accurate segmentation. Concretely, we utilize the ground-truth segmentation of previous frames as accurate inter-frame interactions, providing high-quality tracking references for object segmentation in the next frame. This decouples the current input from the previous output, which enables our model to concentrate on accurately segmenting just based on given tracking information, improving training efficiency and preventing error accumulation. For the inference stage without ground-truth masks, we carefully select the beginning frame to construct tracking information, aiming to ensure accurate tracking-based frame-by-frame object segmentation. With these designs, our tracking-forced method significantly outperforms existing methods on 4 widely used benchmarks by at least 3%. Especially, our method achieves 88.3% [email protected] accuracy and 87.6 overall IoU score on the JHMDB-Sentences dataset, surpassing previous best methods by 5.0% and 8.0, respectively. | Tracking-forced Referring Video Object Segmentation | [
"Ruxue Yan",
"wenya guo",
"XuBo Liu",
"Xumeng Liu",
"Ying Zhang",
"Xiaojie Yuan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
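The "tracking-forced" training idea above resembles teacher forcing: the reference mask fed to frame t comes from the ground truth of frame t-1 rather than from the model's own previous prediction, which decouples the current input from earlier outputs. A minimal sketch with a stand-in segmenter (all module choices are assumptions) could look like this; at inference, the reference would instead be the model's own previous prediction starting from a carefully selected beginning frame.

```python
# Minimal sketch of tracking-forced training with a toy referring segmenter.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegmenter(nn.Module):
    """Stand-in network; a real model would fuse language and vision far more carefully."""
    def __init__(self, in_ch: int = 4, text_dim: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 8, 3, padding=1)
        self.text_proj = nn.Linear(text_dim, 8)
        self.head = nn.Conv2d(8, 1, 1)

    def forward(self, frame, ref_mask, text_feat):
        x = self.conv(torch.cat([frame, ref_mask.float()], dim=1))
        x = F.relu(x + self.text_proj(text_feat).view(1, -1, 1, 1))
        return self.head(x)

def tracking_forced_step(model, frames, gt_masks, text_feat):
    """frames: (T, 3, H, W); gt_masks: (T, 1, H, W). The tracking reference for frame t
    is the ground-truth mask of frame t-1, not the model's previous prediction."""
    loss = 0.0
    for t in range(1, frames.shape[0]):
        pred = model(frames[t:t + 1], gt_masks[t - 1:t], text_feat)
        loss = loss + F.binary_cross_entropy_with_logits(pred, gt_masks[t:t + 1])
    return loss / (frames.shape[0] - 1)

# Toy usage
model = TinySegmenter()
frames = torch.rand(4, 3, 32, 32)
masks = torch.randint(0, 2, (4, 1, 32, 32)).float()
print(tracking_forced_step(model, frames, masks, torch.rand(16)).item())
```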
null | https://openreview.net/forum?id=PARvVuFikr | @inproceedings{
zhang2024effective,
title={Effective optimization of root selection towards improved explanation of deep classifiers},
author={Xin Zhang and Sheng-hua Zhong and Jianmin Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=PARvVuFikr}
} | Explaining what part of the input images primarily contributed to the predicted classification results by deep models has been widely researched over the years, and many effective methods have been reported in the literature, for which deep Taylor decomposition (DTD) served as the primary foundation due to its advantage in theoretical explanations brought in by Taylor expansion and approximation. Recent research, however, has shown that the root of Taylor decomposition could extend beyond local linearity, thus causing DTD to fail in delivering expected performances. In this paper, we propose a universal root inference method to overcome the shortfall and strengthen the roles of DTD in explainability and interpretability of deep classifications. In comparison with the existing approaches, our proposed method features: (i) theoretical establishment of the relationship between ideal roots and the propagated relevances; (ii) exploitation of gradient descents in learning a universal root inference; and (iii) constrained optimization of its final root selection. Extensive experiments, both quantitative and qualitative, validate that our proposed root inference is not only effective, but also delivers significantly improved performances in explaining a range of deep classifiers. | Effective optimization of root selection towards improved explanation of deep classifiers | [
"Xin Zhang",
"Sheng-hua Zhong",
"Jianmin Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=P7h8GjANzW | @inproceedings{
shi2024fewshot,
title={Few-shot Semantic Segmentation via Perceptual Attention and Spatial Control},
author={Guangchen Shi and Wei Zhu and Yirui Wu and Danhuai Zhao and Kang Zheng and Tong Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=P7h8GjANzW}
} | Few-shot semantic segmentation (FSS) aims to locate pixels of unseen classes with clues from a few labeled samples. Recently, thanks to profound prior knowledge, diffusion models have been expanded to achieve FSS tasks. However, due to probabilistic noising and denoising processes, it is difficult for them to maintain spatial relationships between inputs and outputs, leading to inaccurate segmentation masks. To address this issue, we propose a Diffusion-based Segmentation network (DiffSeg), which decouples probabilistic denoising and segmentation processes. Specifically, DiffSeg leverages attention maps extracted from a pretrained diffusion model as support-query interaction information to guide segmentation, which mitigates the impact of probabilistic processes while benefiting from rich prior knowledge of diffusion models. In the segmentation stage, we present a Perceptual Attention Module (PAM), where two cross-attention mechanisms capture semantic information of support-query interaction and spatial information produced by the pretrained diffusion model. Furthermore, a self-attention mechanism within PAM ensures a balanced dependence for segmentation, thus preventing inconsistencies between the aforementioned semantic and spatial information. Additionally, considering the uncertainty inherent in the generation process of diffusion models, we equip DiffSeg with a Spatial Control Module (SCM), which models spatial structural information of query images to control boundaries of attention maps, thus aligning the spatial location between knowledge representation and query images. Experiments on PASCAL-5$^i$ and COCO datasets show that DiffSeg achieves new state-of-the-art performance with remarkable advantages. | Few-shot Semantic Segmentation via Perceptual Attention and Spatial Control | [
"Guangchen Shi",
"Wei Zhu",
"Yirui Wu",
"Danhuai Zhao",
"Kang Zheng",
"Tong Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=P7YgBpXf90 | @inproceedings{
xie2024generalizing,
title={Generalizing {ISP} Model by Unsupervised Raw-to-raw Mapping},
author={Dongyu Xie and Chaofan Qiao and Lanyue Liang and Zhiwen Wang and Tianyu Li and Qiao Liu and Chongyi Li and Guoqing Wang and Yang Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=P7YgBpXf90}
} | ISP (Image Signal Processor) serves as a pipeline converting unprocessed raw images to sRGB images, positioned before nearly all visual tasks. Due to the varying spectral sensitivities of cameras, raw images captured by different cameras exist in different color spaces, making it challenging to deploy an ISP across cameras with consistent performance. To address this challenge, it is intuitive to incorporate a raw-to-raw mapping (mapping raw images across camera color spaces) module into the ISP. However, the lack of paired data (i.e., images of the same scene captured by different cameras) makes it difficult to train a raw-to-raw model using supervised learning methods. In this paper, we aim to achieve ISP generalization by proposing the first unsupervised raw-to-raw model. To be specific, we propose a CSTPP (Color Space Transformation Parameters Predictor) module to predict the space transformation parameters in a patch-wise manner, which can accurately perform color space transformation and flexibly manage complex lighting conditions. Additionally, we design a CycleGAN-style training framework to realize unsupervised learning, overcoming the deficiency of paired data. Our proposed unsupervised model achieved performance comparable to that of the state-of-the-art semi-supervised method in the raw-to-raw task. Furthermore, to assess its ability to generalize the ISP model across different cameras, we formulated the cross-camera ISP task for the first time and demonstrated the performance of our method through extensive experiments. Codes will be publicly available. | Generalizing ISP Model by Unsupervised Raw-to-raw Mapping | [
"Dongyu Xie",
"Chaofan Qiao",
"Lanyue Liang",
"Zhiwen Wang",
"Tianyu Li",
"Qiao Liu",
"Chongyi Li",
"Guoqing Wang",
"Yang Yang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
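A patch-wise color-space transformation of the kind attributed to the CSTPP module above can be sketched as predicting one 3x3 matrix plus an offset per patch and applying it as a linear map to that patch's pixels. The predictor architecture, the 3-channel raw representation, and the patch size below are simplifying assumptions, not the paper's design.

```python
# Sketch of a patch-wise linear color-space transform for raw-to-raw mapping.
import torch
import torch.nn as nn

class PatchwiseColorTransform(nn.Module):
    def __init__(self, patch: int = 32):
        super().__init__()
        self.patch = patch
        self.predictor = nn.Sequential(                 # per-patch parameter head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, 12)                           # 9 matrix + 3 offset entries
        )

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        """raw: (B, 3, H, W), a simplified 3-channel raw in the source color space."""
        B, _, H, W = raw.shape
        p, out = self.patch, raw.clone()
        for y in range(0, H, p):
            for x in range(0, W, p):
                tile = raw[:, :, y:y + p, x:x + p]
                params = self.predictor(tile)                   # (B, 12)
                M = params[:, :9].view(B, 3, 3)
                off = params[:, 9:].view(B, 3, 1)
                flat = tile.reshape(B, 3, -1)                   # (B, 3, p*p)
                mapped = torch.bmm(M, flat) + off               # linear color transform
                out[:, :, y:y + p, x:x + p] = mapped.view_as(tile)
        return out

# Toy usage
print(PatchwiseColorTransform()(torch.rand(2, 3, 64, 64)).shape)
```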
null | https://openreview.net/forum?id=P66QvdCPjE | @inproceedings{
wang2024new,
title={New Job, New Gender? Measuring the Social Bias in Image Generation Models},
author={Wenxuan Wang and Haonan Bai and Jen-tse Huang and Yuxuan WAN and Youliang Yuan and Haoyi Qiu and Nanyun Peng and Michael Lyu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=P66QvdCPjE}
} | Image generation models can generate or edit images from a given text. Recent advancements in image generation technology, exemplified by DALL-E and Midjourney, have been groundbreaking. These advanced models, despite their impressive capabilities, are often trained on massive Internet datasets, making them susceptible to generating content that perpetuates social stereotypes and biases, which can lead to severe consequences. Prior research on assessing bias within image generation models suffers from several shortcomings, including limited accuracy, reliance on extensive human labor, and lack of comprehensive analysis. In this paper, we propose BiasPainter, a novel evaluation framework that can accurately, automatically and comprehensively trigger social bias in image generation models. BiasPainter uses a diverse range of seed images of individuals and prompts the image generation models to edit these images using gender, race, and age-neutral queries. These queries span 62 professions, 39 activities, 57 types of objects, and 70 personality traits. The framework then compares the edited images to the original seed images, focusing on the significant changes related to gender, race, and age. BiasPainter adopts a key insight that these characteristics should not be modified when subjected to neutral prompts. Built upon this design, BiasPainter can trigger the social bias and evaluate the fairness of image generation models.
We use BiasPainter to evaluate six widely-used image generation models, such as stable diffusion and Midjourney. Experimental results show that BiasPainter can successfully trigger social bias in image generation models. According to our human evaluation, BiasPainter can achieve 90.8\% accuracy on automatic bias detection, which is significantly higher than the results reported in previous work. All the code, data, and experimental results will be released to facilitate future research. | New Job, New Gender? Measuring the Social Bias in Image Generation Models | [
"Wenxuan Wang",
"Haonan Bai",
"Jen-tse Huang",
"Yuxuan WAN",
"Youliang Yuan",
"Haoyi Qiu",
"Nanyun Peng",
"Michael Lyu"
] | Conference | oral | 2401.00763 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=P1n5GLMxde | @inproceedings{
ma2024addg,
title={{ADDG}: An Adaptive Domain Generalization Framework for Cross-Plane {MRI} Segmentation},
author={Zibo Ma and Bo Zhang and Zheng Zhang and Wu Liu and Wufan Wang and Hui Gao and Wendong Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=P1n5GLMxde}
} | Multi-planar and multi-slice magnetic resonance imaging (MRI) can provide more comprehensive 3D structural information for disease diagnosis. However, compared to multi-source MRI, multi-planar MRI uses almost the same scanning parameters but scans different internal structures. This atypical domain difference may lead to poor performance of traditional domain generalization methods in handling multi-planar MRI, especially when MRI from different planes also comes from different sources. In this paper, we introduce ADDG, an Adaptive Domain Generalization Framework tailored for accurate cross-plane MRI segmentation. ADDG significantly mitigates the impact of information loss caused by slice spacing by incorporating 3D shape constraints of the segmentation target, and better clarifies the feature differences between different planes of data through adaptive data partitioning strategy. Specifically, we propose a mesh deformation-based organ segmentation network to simultaneously delineate the 2D boundary and 3D mask of the prostate, as well as to guide more accurate mesh deformation. We also develop an organ-specific mesh template and employ Loop subdivision for unpooling new vertices to a triangular mesh to guide the mesh deformation task, resulting in smoother organ shapes. Furthermore, we design a flexible meta-learning paradigm that adaptively partitions data domains based on invariant learning, which can learn domain invariant features from multi-source training sets to further enhance the generalization ability of the model. Experimental results show that our approach outperforms several medical image segmentation, single-planar-based 3D shape reconstruction, and domain generalization methods. | ADDG: An Adaptive Domain Generalization Framework for Cross-Plane MRI Segmentation | [
"Zibo Ma",
"Bo Zhang",
"Zheng Zhang",
"Wu Liu",
"Wufan Wang",
"Hui Gao",
"Wendong Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Oq8AYY6qTY | @inproceedings{
dai2024axiomvision,
title={AxiomVision: Accuracy-Guaranteed Adaptive Visual Model Selection for Perspective-Aware Video Analytics},
author={Xiangxiang DAI and Zeyu Zhang and Peng Yang and Yuedong Xu and Xutong Liu and John C.S. Lui},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Oq8AYY6qTY}
} | The rapid evolution of multimedia and computer vision technologies requires adaptive visual model deployment strategies to effectively handle diverse tasks and varying environments. This work introduces \textit{AxiomVision}, a novel framework that can guarantee accuracy by leveraging edge computing to dynamically select the most efficient visual models for video analytics under diverse scenarios. Utilizing a tiered edge-cloud architecture, \textit{AxiomVision} enables the deployment of a broad spectrum of visual models, from lightweight to complex DNNs, that can be tailored to specific scenarios while considering camera source impacts. In addition, \textit{AxiomVision} provides three core innovations: (1) a dynamic visual model selection mechanism utilizing continual online learning, (2) an efficient online method that efficiently takes into account the influence of the camera's perspective, and (3) a topology-driven grouping approach that accelerates the model selection process. With rigorous theoretical guarantees, these advancements provide a scalable and effective solution for visual tasks inherent to multimedia systems, such as object detection, classification, and counting. Empirically, \textit{AxiomVision} achieves a 25.7\% improvement in accuracy. | AxiomVision: Accuracy-Guaranteed Adaptive Visual Model Selection for Perspective-Aware Video Analytics | [
"Xiangxiang DAI",
"Zeyu Zhang",
"Peng Yang",
"Yuedong Xu",
"Xutong Liu",
"John C.S. Lui"
] | Conference | poster | 2407.20124 | [
"https://github.com/zeyuzhangzyz/axiomvision"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
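Dynamic model selection with "continual online learning", as described above, is commonly realised with bandit-style rules; the sketch below uses a plain UCB1 selector as a stand-in. The reward proxy, camera grouping, and perspective handling of AxiomVision are not reproduced and should be treated as assumptions.

```python
# Stand-in UCB1 selector for choosing among visual models online.
import math
import random

class UCBModelSelector:
    def __init__(self, model_names):
        self.names = list(model_names)
        self.counts = [0] * len(self.names)
        self.values = [0.0] * len(self.names)   # running mean reward per model
        self.t = 0

    def select(self) -> int:
        self.t += 1
        for i, c in enumerate(self.counts):     # try each model once first
            if c == 0:
                return i
        ucb = [v + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, idx: int, reward: float) -> None:
        self.counts[idx] += 1
        self.values[idx] += (reward - self.values[idx]) / self.counts[idx]

# Toy usage: pretend the third model suits this camera's perspective best.
selector = UCBModelSelector(["tiny-det", "medium-det", "heavy-det"])
true_acc = [0.55, 0.68, 0.74]
for _ in range(500):
    i = selector.select()
    selector.update(i, random.gauss(true_acc[i], 0.05))
print(selector.names[max(range(3), key=selector.counts.__getitem__)])
```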
null | https://openreview.net/forum?id=OmymuhXWcn | @inproceedings{
liu2024crosstask,
title={Cross-Task Knowledge Transfer for Semi-supervised Joint 3D Grounding and Captioning},
author={Yang Liu and Daizong Liu and Zongming Guo and Wei Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=OmymuhXWcn}
} | 3D visual grounding is a fundamental but important task in multimedia understanding, which aims to locate a specific object in a complicated 3D scene semantically according to a text description. However, this task requires a large number of annotations of labeled text-object pairs for training, and the scarcity of annotated data has been a key obstacle in this task. To this end, this paper makes the first attempt to introduce and address a new semi-supervised setting, where only a few text-object labels are provided during training. Considering most scene data has no annotation, we explore a new solution for unlabeled 3D grounding by additionally training and transferring samples from a correlated task, i.e., 3D captioning. Our main insight is that 3D grounding and captioning are complementary and can be iteratively trained with unlabeled data to provide object and text contexts for each other with pseudo-label learning. Specifically, we propose a novel 3D Cross-Task Teacher-Student Framework (3D-CTTSF) for joint 3D grounding and captioning in the semi-supervised setting, where each branch contains parallel grounding and captioning modules. We first pre-train the two modules of the teacher branch with the limited labeled data for warm-up. Then, we train the student branch to mimic the ability of the teacher model and iteratively update both branches with the unlabeled data. In particular, we transfer the learned knowledge between the grounding and captioning modules across two branches to generate and refine the pseudo labels of unlabeled data for providing reliable supervision. To further improve the pseudo-label quality, we design a cross-task pseudo-label generation scheme, filtering low-quality pseudo-labels at the detection, captioning, and grounding levels, respectively. Experimental results on various datasets show competitive performances in both tasks compared to previous fully- and weakly-supervised methods, demonstrating the proposed 3D-CTTSF can serve as an effective solution to overcome the data scarcity issue. | Cross-Task Knowledge Transfer for Semi-supervised Joint 3D Grounding and Captioning | [
"Yang Liu",
"Daizong Liu",
"Zongming Guo",
"Wei Hu"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=OimiGJDpnW | @inproceedings{
liu2024dynamic,
title={Dynamic Evidence Decoupling for Trusted Multi-view Learning},
author={Ying Liu and Lihong Liu and Cai Xu and Xiangyu Song and Ziyu Guan and Wei Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=OimiGJDpnW}
} | Multi-view learning methods often focus on improving decision accuracy, while neglecting the decision uncertainty, limiting their suitability for safety-critical applications. To mitigate this, researchers propose trusted multi-view learning methods that estimate classification probabilities and uncertainty by learning the class distributions for each instance. However, these methods assume that the data from each view can effectively differentiate all categories, ignoring the semantic vagueness phenomenon in real-world multi-view data. Our findings demonstrate that this phenomenon significantly suppresses the learning of view-specific evidence in existing methods. We propose a Consistent and Complementary-aware trusted Multi-view Learning (CCML) method to solve this problem. We first construct view-opinions using evidential deep neural networks, which consist of belief mass vectors and uncertainty estimates. Next, we dynamically decouple the consistent and complementary evidence. The consistent evidence is derived from the shared portions across all views, while the complementary evidence is obtained by averaging the differing portions across all views. We ensure that the opinion constructed from the consistent evidence strictly aligns with the ground-truth category. For the opinion constructed from the complementary evidence, we only require it to reflect the probability of the true category, allowing for potential vagueness in the evidence. We compare CCML with state-of-the-art baselines on one synthetic and six real-world datasets. The results validate the effectiveness of the dynamic evidence decoupling strategy and show that CCML significantly outperforms baselines on accuracy and reliability. We promise to release the code and all datasets on GitHub and show the link here. | Dynamic Evidence Decoupling for Trusted Multi-view Learning | [
"Ying Liu",
"Lihong Liu",
"Cai Xu",
"Xiangyu Song",
"Ziyu Guan",
"Wei Zhao"
] | Conference | oral | 2410.03796 | [
"https://github.com/lihong-liu/ccml"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
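The evidence-decoupling step described above can be sketched directly from the abstract: the consistent evidence is the element-wise shared portion across views (taken here as the minimum), the complementary evidence averages what remains, and evidence is turned into belief masses and uncertainty via the standard Dirichlet construction of evidential deep learning. The minimum/average operators follow the abstract's wording; anything beyond that is an assumption.

```python
# Sketch of consistent/complementary evidence decoupling with Dirichlet opinions.
import torch

def decouple_evidence(view_evidence: torch.Tensor):
    """view_evidence: (V, B, K) non-negative evidence from V view-specific networks."""
    consistent = view_evidence.min(dim=0).values            # shared across all views
    complementary = (view_evidence - consistent).mean(dim=0)
    return consistent, complementary

def evidence_to_opinion(evidence: torch.Tensor):
    """Standard evidential deep learning: alpha = e + 1, belief = e / S, u = K / S."""
    K = evidence.shape[-1]
    alpha = evidence + 1.0
    S = alpha.sum(dim=-1, keepdim=True)
    belief = evidence / S
    uncertainty = K / S.squeeze(-1)
    return belief, uncertainty

# Toy usage: 3 views, batch of 4, 5 classes.
ev = torch.relu(torch.randn(3, 4, 5)) * 5
cons, comp = decouple_evidence(ev)
b, u = evidence_to_opinion(cons)
print(b.shape, u.shape)   # torch.Size([4, 5]) torch.Size([4])
```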
null | https://openreview.net/forum?id=OiPyYoPhAd | @inproceedings{
ru2024parameterefficient,
title={Parameter-Efficient Complementary Expert Learning for Long-Tailed Visual Recognition},
author={Lixiang Ru and Xin Guo and Lei Yu and Yingying Zhang and Jiangwei Lao and Jian Wang and Jingdong Chen and Yansheng Li and Ming Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=OiPyYoPhAd}
} | Long-tailed recognition (LTR) aims to learn balanced models from extremely unbalanced training data. Fine-tuning pretrained foundation models has recently emerged as a promising research direction for LTR. However, we observe that the fine-tuning process tends to degrade the intrinsic representation capability of pretrained models and lead to model bias towards certain classes, thereby hindering the overall recognition performance. To unleash the intrinsic representation capability of pretrained foundation models, in this work, we propose a new Parameter-Efficient Complementary Expert Learning (PECEL) for LTR. Specifically, PECEL consists of multiple experts, where individual experts are trained via Parameter-Efficient Fine-Tuning (PEFT) and encouraged to learn different expertise on complementary sub-categories via a new sample-aware logit adjustment loss. By aggregating the predictions of different experts, PECEL effectively achieves a balanced performance on long-tailed classes. Nevertheless, learning multiple experts generally introduces extra trainable parameters. To ensure parameter efficiency, we further propose a parameter sharing strategy which decomposes and shares the parameters in each expert. Extensive experiments on 4 LTR benchmarks show that the proposed PECEL can effectively learn multiple complementary experts without increasing the trainable parameters and achieve new state-of-the-art performance. | Parameter-Efficient Complementary Expert Learning for Long-Tailed Visual Recognition | [
"Lixiang Ru",
"Xin Guo",
"Lei Yu",
"Yingying Zhang",
"Jiangwei Lao",
"Jian Wang",
"Jingdong Chen",
"Yansheng Li",
"Ming Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
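The "sample-aware logit adjustment loss" above builds on the standard logit-adjusted cross-entropy for long-tailed recognition, where a scaled log class-prior is added to the logits during training so that the raw logits behave like a balanced classifier at test time. The sketch below shows only that standard base loss; PECEL's sample-aware weighting and expert assignment are not reproduced here.

```python
# Standard logit-adjusted cross-entropy (base loss for long-tailed recognition).
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits: torch.Tensor, targets: torch.Tensor,
                      class_counts: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """logits: (B, K); targets: (B,); class_counts: (K,) per-class training frequencies."""
    prior = class_counts.float() / class_counts.sum()
    # Adding tau * log(prior) during training counteracts the head-class bias,
    # so unadjusted logits yield more balanced predictions at test time.
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Toy usage on a 10-class long-tailed label distribution.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
counts = torch.tensor([500, 400, 300, 200, 100, 50, 30, 20, 10, 5])
print(logit_adjusted_ce(logits, targets, counts).item())
```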
null | https://openreview.net/forum?id=OhQLdb4seO | @inproceedings{
liu2024echoaudio,
title={EchoAudio: Efficient and High-Quality Text-to-Audio Generation with Minimal Inference Steps},
author={Huadai Liu and Rongjie Huang and Yang Liu and Hengyuan Cao and Jialei Wang and Xize Cheng and Siqi Zheng and Zhou Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=OhQLdb4seO}
} | Recent advancements in Latent Diffusion Models (LDMs) have propelled them to the forefront of various generative tasks. However, their iterative sampling process poses a significant computational burden, resulting in slow generation speeds and limiting their application in text-to-audio generation deployment. In this work, we introduce AudioLCM, a novel consistency-based model tailored for efficient and high-quality text-to-audio generation. Unlike prior approaches that address noise removal through iterative processes, AudioLCM integrates Consistency Models (CMs) into the generation process, facilitating rapid inference through a mapping from any point at any time step to the trajectory's initial point. To overcome the convergence issue inherent in LDMs with reduced sample iterations, we propose the Guided Latent Consistency Distillation with a multi-step Ordinary Differential Equation (ODE) solver. This innovation shortens the time schedule from thousands to dozens of steps while maintaining sample quality, thereby achieving fast convergence and high-quality generation. Furthermore,
to optimize the performance of transformer-based neural network architectures, we integrate the advanced techniques pioneered by LLaMA into the foundational framework of transformers. This architecture supports stable and efficient training, ensuring robust performance in text-to-audio synthesis. Experimental results on text-to-audio generation and text-to-music synthesis tasks demonstrate that AudioLCM needs only 2 iterations to synthesize high-fidelity audios, while it maintains sample quality competitive with state-of-the-art models using hundreds of steps. AudioLCM enables a sampling speed of 333x faster than real-time on a single NVIDIA 4090Ti GPU, making generative models practically applicable to text-to-audio generation deployment. Our extensive preliminary analysis shows that each design in AudioLCM is effective.
\footnote{Audio samples are available at \url{https://AudioLCM.github.io/.}}~\footnote{Code is Available at~\url{https://github.com/Text-to-Audio/AudioLCM}} | EchoAudio: Efficient and High-Quality Text-to-Audio Generation with Minimal Inference Steps | [
"Huadai Liu",
"Rongjie Huang",
"Yang Liu",
"Hengyuan Cao",
"Jialei Wang",
"Xize Cheng",
"Siqi Zheng",
"Zhou Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Og8PCNxZ7g | @inproceedings{
zhang2024lanevil,
title={LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions},
author={Tianyuan Zhang and Lu Wang and Hainan Li and Yisong Xiao and Siyuan Liang and Aishan Liu and Xianglong Liu and Dacheng Tao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Og8PCNxZ7g}
} | Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering. Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions such as shadows and tire marks on the road. This research gap poses significant safety challenges since these illusions exist naturally in real-world traffic situations. For the first time, this paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil for evaluating the robustness of LD against this natural corruption. We systematically design 14 prevalent yet critical types of environmental illusions (e.g., shadow, reflection) that cover a wide spectrum of real-world influencing factors in LD tasks. Based on real-world environments, we create 94 realistic and customizable 3D cases using the widely used CARLA simulator, resulting in a dataset comprising 90,292 sampled images. Through extensive experiments, we benchmark the robustness of popular LD methods using LanEvil, revealing substantial performance degradation (-5.37% Accuracy and -10.70% F1-Score on average), with shadow effects posing the greatest risk (-7.39% Accuracy). Additionally, we assess the performance of commercial auto-driving systems OpenPilot and Apollo through collaborative simulations, demonstrating that proposed environmental illusions can lead to incorrect decisions and potential traffic accidents. To defend against environmental illusions, we propose the Attention Area Mixing (AAM) approach using hard examples, which witness significant robustness improvement (+3.76%) under illumination effects. We hope our paper can contribute to advancing more robust auto-driving systems in the future. Part of our dataset and demos can be found at the anonymous website. | LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions | [
"Tianyuan Zhang",
"Lu Wang",
"Hainan Li",
"Yisong Xiao",
"Siyuan Liang",
"Aishan Liu",
"Xianglong Liu",
"Dacheng Tao"
] | Conference | poster | 2406.00934 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |