Dataset schema (one record per ACM Multimedia 2024 paper; the records below list their fields in this column order):

| column | dtype | range / values |
|---|---|---|
| bibtex_url | null | all values null |
| proceedings | string | 42–42 chars |
| bibtext | string | 215–445 chars |
| abstract | string | 820–2.37k chars |
| title | string | 24–147 chars |
| authors | sequence (nullable) | 1–13 entries |
| id | string | 1 class |
| type | string | 2 classes |
| arxiv_id | string | 0–10 chars |
| GitHub | sequence | 1–1 entries |
| paper_page | string | 33 classes |
| n_linked_authors | int64 | -1 to 4 |
| upvotes | int64 | -1 to 21 |
| num_comments | int64 | -1 to 4 |
| n_authors | int64 | -1 to 11 |
| Models | sequence | 0–1 entries |
| Datasets | sequence | 0–1 entries |
| Spaces | sequence | 0–4 entries |
| old_Models | sequence | 0–1 entries |
| old_Datasets | sequence | 0–1 entries |
| old_Spaces | sequence | 0–4 entries |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
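A minimal sketch of querying a table with this schema using the Hugging Face `datasets` library is shown below; the repository id `example/acmmm-2024-papers` and the `train` split name are placeholder assumptions, not the dataset's actual Hub coordinates.

```python
# Minimal sketch, assuming the table is published on the Hugging Face Hub.
# "example/acmmm-2024-papers" and the "train" split are placeholder
# assumptions; substitute the real repository id and split name.
from datasets import load_dataset

ds = load_dataset("example/acmmm-2024-papers", split="train")

# Oral presentations that list at least one non-empty GitHub URL
# (per the schema, the GitHub column holds exactly one string, possibly empty).
orals_with_code = ds.filter(
    lambda row: row["type"] == "oral" and any(url.strip() for url in row["GitHub"])
)
for row in orals_with_code:
    print(f'{row["title"]} -> {row["GitHub"][0]}')

# Collect every BibTeX entry into a single .bib file.
with open("acmmm2024.bib", "w", encoding="utf-8") as bib_file:
    bib_file.write("\n\n".join(ds["bibtext"]))
```

The same filtering could equally be done in pandas after calling `ds.to_pandas()`.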
null | https://openreview.net/forum?id=ZMMsLqOC4y | @inproceedings{
yu2024pffaa,
title={{PFFAA}: Prototype-based Feature and Frequency Alteration Attack for Semantic Segmentation},
author={Zhidong Yu and Zhenbo Shi and Xiaoman Liu and Wei Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ZMMsLqOC4y}
} | Recent research has confirmed the possibility of adversarial attacks on deep models. However, these methods typically assume that the surrogate model has access to the target domain, which is difficult to achieve in practical scenarios. To address this limitation, this paper introduces a novel cross-domain attack method tailored for semantic segmentation, named Prototype-based Feature and Frequency Alteration Attack (PFFAA). This approach empowers a surrogate model to efficiently deceive the black-box victim model without requiring access to the target data. Specifically, through limited queries on the victim model, bidirectional relationships are established between the target classes of the victim model and the source classes of the surrogate model, enabling the extraction of prototypes for these classes. During the attack process, the features of each source class are perturbed to move these features away from their respective prototypes, thereby manipulating the feature space. Moreover, we propose substituting frequency information from images used to train the surrogate model into the frequency domain of the test images to modify texture and structure, thus further enhancing the attack efficacy. Experimental results across multiple datasets and victim models validate that PFFAA achieves state-of-the-art attack performances. | PFFAA: Prototype-based Feature and Frequency Alteration Attack for Semantic Segmentation | [
"Zhidong Yu",
"Zhenbo Shi",
"Xiaoman Liu",
"Wei Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ZDeczo36KR | @inproceedings{
huang2024soap,
title={{SOAP}: Enhancing Spatio-Temporal Relation and Motion Information Capturing for Few-Shot Action Recognition},
author={Wenbo Huang and Jinghui Zhang and Xuwei Qian and Zhen Wu and Meng Wang and Lei Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ZDeczo36KR}
} | High frame-rate (HFR) videos of action recognition improve fine-grained expression while reducing the spatio-temporal relation and motion information density. Thus, large amounts of video samples are continuously required for traditional data-driven training. However, samples are not always sufficient in real-world scenarios, which promotes few-shot action recognition (FSAR) research. We observe that most recent FSAR works build spatio-temporal relation of video samples via temporal alignment after spatial feature extraction, cutting apart spatial and temporal features within samples. They also capture motion information via narrow perspectives between adjacent frames without considering density, leading to insufficient motion information capturing. Therefore, we propose a novel plug-and-play architecture for FSAR called $\underline{\textbf{S}}$patio-temp$\underline{\textbf{O}}$ral fr$\underline{\textbf{A}}$me tu$\underline{\textbf{P}}$le enhancer ($\textbf{SOAP}$) in this paper. The model we design with this architecture is referred to as SOAP-Net. Temporal connections between different feature channels and spatio-temporal relation of features are considered instead of simple feature extraction. Comprehensive motion information is also captured, using frame tuples with multiple frames containing more motion information than adjacent frames. Combining frame tuples of different frame counts further provides a broader perspective. SOAP-Net achieves new state-of-the-art performance across well-known benchmarks such as SthSthV2, Kinetics, UCF101, and HMDB51. Extensive empirical evaluations underscore the competitiveness, pluggability, generalization, and robustness of SOAP. The code will be released. | SOAP: Enhancing Spatio-Temporal Relation and Motion Information Capturing for Few-Shot Action Recognition | [
"Wenbo Huang",
"Jinghui Zhang",
"Xuwei Qian",
"Zhen Wu",
"Meng Wang",
"Lei Zhang"
] | Conference | poster | 2407.16344 | [
"https://github.com/wenbohuang1002/soap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Z8ViXPfcUr | @inproceedings{
liao2024selection,
title={Selection and Reconstruction of Key Locals: A Novel Specific Domain Image-Text Retrieval Method},
author={Yu Liao and Xinfeng Zhang and Rui Yang and Jianwei Tao and Bai Liu and Zhipeng Hu and Shuang Wang and Zeng Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Z8ViXPfcUr}
} | In recent years, Vision-Language Pre-training (VLP) models have demonstrated rich prior knowledge for multimodal alignment, prompting investigations into their application in Specific Domain Image-Text Retrieval (SDITR) such as Text-Image Person Re-identification (TIReID) and Remote Sensing Image-Text Retrieval (RSITR). Due to the unique data characteristics in specific scenarios, the primary challenge is to leverage discriminative fine-grained local information for improved mapping of images and text into a shared space. Current approaches interact with all multimodal local features for alignment, implicitly focusing on discriminative local information to distinguish data differences, which may bring noise and uncertainty. Furthermore, their VLP feature extractors like CLIP often focus on instance-level representations, potentially reducing the discriminability of fine-grained local features. To alleviate these issues, we propose an Explicit Key Local information Selection and Reconstruction Framework (EKLSR), which explicitly selects key local information to enhance feature representation. Specifically, we introduce a Key Local information Selection and Fusion (KLSF) that utilizes hidden knowledge from the VLP model to select interpretably and fuse key local information. Secondly, we employ Key Local segment Reconstruction (KLR) based on multimodal interaction to reconstruct the key local segments of images (text), significantly enriching their discriminative information and enhancing both inter-modal and intra-modal interaction alignment. To demonstrate the effectiveness of our approach, we conducted experiments on five datasets across TIReID and RSITR. Notably, our EKLSR model achieves state-of-the-art performance on two RSITR datasets. | Selection and Reconstruction of Key Locals: A Novel Specific Domain Image-Text Retrieval Method | [
"Yu Liao",
"Xinfeng Zhang",
"Rui Yang",
"Jianwei Tao",
"Bai Liu",
"Zhipeng Hu",
"Shuang Wang",
"Zeng Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Z4OFYzz7ab | @inproceedings{
yang2024multimodalaware,
title={Multimodal-aware Multi-intention Learning for Recommendation},
author={Wei Yang and Qingchen Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Z4OFYzz7ab}
} | Whether it is an e-commerce platform or a short video platform, the effective use of multi-modal data plays an important role in the recommendation system. More and more researchers are exploring how to effectively use multimodal signals to entice more users to buy goods or watch short videos. Some studies have added multimodal features as side information to the model and achieved certain results. In practice, the purchase behavior of users mainly depends on some subjective intentions of users. However, it is difficult for neural networks to effectively process noise information and extract high-level intention information. To investigate the benefits of latent intentions and leverage them effectively for recommendation, we propose a Multimodal-aware Multi-intention Learning method for recommendation (MMIL). Specifically, we establish the relationship between intention and recommendation objective based on probability formula, and propose a multi-intention recommendation optimization objective which can avoid intention overfitting. We then construct an intent representation learner to learn accurate multiple intent representations. Further, considering the close relationship between user intent and multimodal signals, we introduce modal attention mechanisms to learn modal perceived intent representations. In addition, we design a multi-intention comparison module to assist the learning of multiple intention representations. On three real-world data sets, the proposed MMIL method outperforms other advanced methods. The effectiveness of intention modeling and intention contrast module is verified by comprehensive experiments. | Multimodal-aware Multi-intention Learning for Recommendation | [
"Wei Yang",
"Qingchen Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Z3PWWa62oB | @inproceedings{
qu2024visualsemantic,
title={Visual-Semantic Decomposition and Partial Alignment for Document-based Zero-Shot Learning},
author={Xiangyan Qu and Jing Yu and Keke Gai and Jiamin Zhuang and Yuanmin Tang and Gang Xiong and Gaopeng Gou and Qi Wu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Z3PWWa62oB}
} | Recent work shows that documents from encyclopedias serve as helpful auxiliary information for zero-shot learning. Existing methods align the entire semantics of a document with corresponding images to transfer knowledge. However, they disregard that semantic information is not equivalent between them, resulting in a suboptimal alignment. In this work, we propose a novel network to extract multi-view semantic concepts from documents and images and align the matching rather than entire concepts. Specifically, we propose a semantic decomposition module to generate multi-view semantic embeddings from visual and textual sides, providing the basic concepts for partial alignment. To alleviate the issue of information redundancy among embeddings, we propose the local-to-semantic variance loss to capture distinct local details and multiple semantic diversity loss to enforce orthogonality among embeddings. Subsequently, two losses are introduced to partially align visual-semantic embedding pairs according to their semantic relevance at the view and word-to-patch levels. Consequently, we consistently outperform state-of-the-art methods under two document sources in three standard benchmarks for document-based zero-shot learning. Qualitatively, we show that our model learns the interpretable partial semantic association. The code is available at https://anonymous.4open.science/r/EmDepart. | Visual-Semantic Decomposition and Partial Alignment for Document-based Zero-Shot Learning | [
"Xiangyan Qu",
"Jing Yu",
"Keke Gai",
"Jiamin Zhuang",
"Yuanmin Tang",
"Gang Xiong",
"Gaopeng Gou",
"Qi Wu"
] | Conference | poster | 2407.15613 | [
"https://github.com/morningstarovo/emdepart"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Z2MCyPBT7R | @inproceedings{
han2024erlmr,
title={{ERL}-{MR}: Harnessing the Power of Euler Feature Representations for Balanced Multi-modal Learning},
author={Weixiang Han and Chengjun Cai and Guo Yu and Jialiang Peng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Z2MCyPBT7R}
} | Multi-modal learning leverages data from diverse perceptual media to obtain enriched representations, thereby empowering machine learning models to complete more complex tasks. However, recent research results indicate that multi-modal learning still suffers from “modality imbalance”: Certain modalities' contributions are suppressed by dominant ones, consequently constraining the overall performance enhancement of multimodal learning. To tackle this issue, current approaches attempt to mitigate modality competition in various ways, but their effectiveness is still limited. To this end, we propose an Euler Representation Learning-based Modality Rebalance (ERL-MR) strategy, which reshapes the underlying competitive relationships between modalities into mutually reinforcing win-win situations while maintaining stable feature optimization directions. Specifically, ERL-MR employs Euler's formula to map original features to complex space, constructing cooperatively enhanced non-redundant features for each modality, which helps reverse the situation of modality competition. Moreover, to counteract the performance degradation resulting from optimization drift among modalities, we propose a Multi-Modal Constrained (MMC) loss based on cosine similarity of complex feature phase and cross-entropy loss of individual modalities, guiding the optimization direction of the fusion network. Extensive experiments conducted on four multi-modal multimedia datasets and two task-specific multi-modal multimedia datasets demonstrate the superiority of our ERL-MR strategy over state-of-the-art baselines, achieving modality rebalancing and further performance improvements. | ERL-MR: Harnessing the Power of Euler Feature Representations for Balanced Multi-modal Learning | [
"Weixiang Han",
"Chengjun Cai",
"Guo Yu",
"Jialiang Peng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YxGBSyJRRa | @inproceedings{
li2024domain,
title={Domain Knowledge Enhanced Vision-Language Pretrained Model for Dynamic Facial Expression Recognition},
author={Liupeng Li and Yuhua Zheng and Shupeng Liu and Xiaoyin Xu and Taihao Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YxGBSyJRRa}
} | Dynamic facial expression recognition (DFER) is a rapidly developing field that focuses on recognizing facial expressions in video sequences. However, the complex temporal modeling caused by noisy frames, along with the limited training data significantly hinder the further development of DFER. Previous efforts in this domain have been limited as they tackled these issues separately. Inspired by recent advances of pretrained vision-language models (e.g., CLIP), we propose to leverage it to jointly address the two limitations in DFER. Since the raw CLIP model lacks the ability to model temporal relationships and determine the optimal task-related textual prompts, we utilize DFER-specific domain knowledge, including characteristics of temporal correlations and relationships between facial behavior descriptions at different levels, to guide the adaptation of CLIP to DFER. Specifically, we propose enhancements to CLIP's visual encoder through the design of a hierarchical video encoder that captures both short- and long-term temporal correlations in DFER. Meanwhile, we align facial expressions with action units through prior knowledge to construct semantically rich textual prompts, which are further enhanced with visual contents. Furthermore, we introduce a class-aware consistency regularization mechanism that adaptively filters out noisy frames, bolstering the model's robustness against interference. Extensive experiments on three in-the-wild dynamic facial expression datasets demonstrate that our method outperforms the state-of-the-art DFER approaches. The code is available at https://github.com/liliupeng28/DK-CLIP. | Domain Knowledge Enhanced Vision-Language Pretrained Model for Dynamic Facial Expression Recognition | [
"Liupeng Li",
"Yuhua Zheng",
"Shupeng Liu",
"Xiaoyin Xu",
"Taihao Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Yx2HjWZcZI | @inproceedings{
zhou2024gaussian,
title={Gaussian Splatting With Neural Basis Extension},
author={Zhi Zhou and Junke Zhu and ZhangJin Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Yx2HjWZcZI}
} | The 3D Gaussian Splatting(3D-GS) method has recently sparked a new revolution in novel view synthesis with its remarkable visual effects and fast rendering speed. However, its reliance on simple spherical harmonics for color representation leads to subpar performance in complex scenes, struggling with effects like specular highlights, light refraction, etc. Also, 3D-GS adopts a periodic split strategy, significantly increasing the model's disk space and hindering rendering efficiency. To tackle these challenges, we introduce Gaussian Splatting with Neural Basis Extension (GSNB), a novel approach that substantially improves the performance of 3D-GS in demanding scenes while reducing storage consumption. Drawing inspiration from basis function, GSNB employs a light-weight MLP to share feature coefficients with spherical harmonics and extends the color calculation of 3D Gaussians for more precise visual effect modeling. This combination enables GSNB to achieve impressive results in scenes with challenging lighting and reflection conditions. Moreover, GSNB utilizes pre-computation to bake the network's output, thereby alleviating inference workload and subsequent speed loss. Furthermore, to leverage the capabilities of Neural Basis Extension and eliminate redundant Gaussians, we propose a new importance criterion to prune the converged Gaussian model and obtain a more compact representation through re-optimization. Experimental results demonstrate that our method delivers high-quality rendering in most scenarios and effectively reduces redundant Gaussians without compromising rendering speed. Our code and real-time demos will be released soon. | Gaussian Splatting With Neural Basis Extension | [
"Zhi Zhou",
"Junke Zhu",
"ZhangJin Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YrdS3abahU | @inproceedings{
zhang2024tag,
title={Tag Tree-Guided Multi-grained Alignment for Multi-Domain Short Video Recommendation},
author={Yuting Zhang and Zhao Zhang and Yiqing Wu and Ying Sun and Fuzhen Zhuang and Wenhui Yu and Lantao Hu and Han Li and Kun Gai and Zhulin An and Yongjun Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YrdS3abahU}
} | Multi-Domain Recommendation (MDR) aims to leverage data from multiple domains to enhance recommendations through overlapping users or items. However, extreme overlap sparsity in some applications makes it challenging for existing multi-domain models to capture domain-shared information. Moreover, the sparse overlapping users or items result in a cold start problem in every single domain and hinder feature space alignment of different domains, posing a challenge for joint optimization across domains. However, in multi-domain short video recommendation, we identify two key characteristics that can greatly alleviate the overlapping sparsity issue and enable domain alignment. (1) The following relations between users and publishers exhibit strong preferences and a concentration effect, as popular video publishers, who constitute a small portion of all users, are followed by a majority of users across various domains. (2) The tag tree structure shared by all videos can help facilitate multi-grained alignment across multiple domains. Based on these characteristics, we propose tag tree-guided multi-grained alignment with publisher enhancement for multi-domain video recommendation. Our model integrates publisher and tag nodes into the user-video bipartite graph as central nodes, enabling user and video alignment across all domains via graph propagation. Then, we propose a tag tree-guided decomposition method to obtain hierarchical graphs for multi-grained alignment. Further, we design tree-guided contrastive learning methods to capture the intra-level and inter-level node relations respectively. Finally, extensive experiments on two real-world short video recommendation datasets demonstrate the effectiveness of our model. | Tag Tree-Guided Multi-grained Alignment for Multi-Domain Short Video Recommendation | [
"Yuting Zhang",
"Zhao Zhang",
"Yiqing Wu",
"Ying Sun",
"Fuzhen Zhuang",
"Wenhui Yu",
"Lantao Hu",
"Han Li",
"Kun Gai",
"Zhulin An",
"Yongjun Xu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Ygs3EX1qo8 | @inproceedings{
shao2024multimodal,
title={Multimodal Physiological Signals Representation Learning via Multiscale Contrasting for Depression Recognition},
author={Kai Shao and Rui Wang and yixue Hao and Long Hu and Min Chen and Hans Arno Jacobsen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Ygs3EX1qo8}
} | Depression recognition based on physiological signals such as functional near-infrared spectroscopy (fNIRS) and electroencephalogram (EEG) has made considerable progress. However, most existing studies ignore the complementarity and semantic consistency of multimodal physiological signals under the same stimulation task in complex spatio-temporal patterns. In this paper, we introduce a multimodal physiological signals representation learning framework using Siamese architecture via multiscale contrasting for depression recognition (MRLMC). First, fNIRS and EEG are transformed into different but correlated data based on a time-domain data augmentation strategy. Then, we design a spatio-temporal contrasting module to learn the representation of fNIRS and EEG through weight-sharing multiscale spatio-temporal convolution. Furthermore, to enhance the learning of semantic representation associated with stimulation tasks, a semantic consistency contrast module is proposed, aiming to maximize the semantic similarity of fNIRS and EEG. Extensive experiments on publicly available and self-collected multimodal physiological signals datasets indicate that MRLMC outperforms the state-of-the-art models. Moreover, our proposed framework is capable of transferring to multimodal time series downstream tasks. We will release the code and weights after review. | Multimodal Physiological Signals Representation Learning via Multiscale Contrasting for Depression Recognition | [
"Kai Shao",
"Rui Wang",
"yixue Hao",
"Long Hu",
"Min Chen",
"Hans Arno Jacobsen"
] | Conference | poster | 2406.16968 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Yg7Rk5WzF3 | @inproceedings{
zhang2024learning,
title={Learning Unknowns from Unknowns: Diversified Negative Prototypes Generator for Few-shot Open-Set Recognition},
author={Zhenyu Zhang and Guangyao Chen and Yixiong Zou and Yuhua Li and Ruixuan Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Yg7Rk5WzF3}
} | Few-shot open-set recognition (FSOR) is a challenging task that requires a model to recognize known classes and identify unknown classes with limited labeled data. Existing approaches, particularly Negative-Prototype-Based methods, generate negative prototypes based solely on known class data. However, as the unknown space is infinite while the known-space is limited, these methods suffer from limited representation capability. To address this limitation, we propose a novel approach, termed Diversified Negative Prototypes Generator (DNPG), which adopts the principle of "learning unknowns from unknowns." Our method leverages the unknown space information learned from base classes to generate more representative negative prototypes for novel classes. During the pre-training phase, we learn the unknown space representation of the base classes. This representation, along with inter-class relationships, is then utilized in the meta-learning process to construct negative prototypes for novel classes. To prevent prototype collapse and ensure adaptability to varying data compositions, we introduce the Swap Alignment (SA) module. Our DNPG model, by learning from the unknown space, generates negative prototypes that cover a broader unknown space, thereby achieving state-of-the-art performance on three standard FSOR datasets. We provide the source code in the supplementary materials for reproducibility. | Learning Unknowns from Unknowns: Diversified Negative Prototypes Generator for Few-shot Open-Set Recognition | [
"Zhenyu Zhang",
"Guangyao Chen",
"Yixiong Zou",
"Yuhua Li",
"Ruixuan Li"
] | Conference | poster | 2408.13373 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YZ68Ifi4yH | @inproceedings{
cai2024avdeepfakem,
title={{AV}-Deepfake1M: A Large-Scale {LLM}-Driven Audio-Visual Deepfake Dataset},
author={Zhixi Cai and Shreya Ghosh and Aman Pankaj Adatia and Munawar Hayat and Abhinav Dhall and Tom Gedeon and Kalin Stefanov},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YZ68Ifi4yH}
} | The detection and localization of highly realistic deepfake audio-visual content are challenging even for the most advanced state-of-the-art methods. While most of the research efforts in this domain are focused on detecting high-quality deepfake images and videos, only a few works address the problem of the localization of small segments of audio-visual manipulations embedded in real videos. In this research, we emulate the process of such content generation and propose the AV-Deepfake1M dataset. The dataset contains content-driven (i) video manipulations, (ii) audio manipulations, and (iii) audio-visual manipulations for more than 2K subjects resulting in a total of more than 1M videos. The paper provides a thorough description of the proposed data generation pipeline accompanied by a rigorous analysis of the quality of the generated data. The comprehensive benchmark of the proposed dataset utilizing state-of-the-art deepfake detection and localization methods indicates a significant drop in performance compared to previous datasets. The proposed dataset will play a vital role in building the next-generation deepfake localization methods. The dataset and associated code will be made public. | AV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset | [
"Zhixi Cai",
"Shreya Ghosh",
"Aman Pankaj Adatia",
"Munawar Hayat",
"Abhinav Dhall",
"Tom Gedeon",
"Kalin Stefanov"
] | Conference | oral | 2311.15308 | [
"https://github.com/controlnet/av-deepfake1m"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YTNN0mOPQN | @inproceedings{
zhang2024spatialtemporal,
title={Spatial-Temporal Context Model for Remote Sensing Imagery Compression},
author={Jinxiao Zhang and Runmin Dong and Juepeng Zheng and Mengxuan Chen and Lixian Zhang and Yi Zhao and Haohuan Fu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YTNN0mOPQN}
} | With the increasing spatial and temporal resolutions of obtained remote sensing (RS) images, effective compression becomes critical for storage, transmission, and large-scale in-memory processing. Although image compression methods achieve a series of breakthroughs for daily images, a straightforward application of these methods to RS domain underutilizes the properties of the RS images, such as content duplication, homogeneity, and temporal redundancy. This paper proposes a Spatial-Temporal Context model (STCM) for RS image compression, jointly leveraging context from a broader spatial scope and across different temporal images. Specifically, we propose a stacked diagonal masked module to expand the contextual reference scope, which is stackable and maintains its parallel capability. Furthermore, we propose spatial-temporal contextual adaptive coding to enable the entropy estimation to reference context across different temporal RS images at the same geographic location. Experiments show that our method outperforms previous state-of-the-art compression methods on rate-distortion (RD) performance. For downstream tasks validation, our method reduces the bitrate by 52 times for single temporal images in the scene classification task while maintaining accuracy. | Spatial-Temporal Context Model for Remote Sensing Imagery Compression | [
"Jinxiao Zhang",
"Runmin Dong",
"Juepeng Zheng",
"Mengxuan Chen",
"Lixian Zhang",
"Yi Zhao",
"Haohuan Fu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YRr7CICk5a | @inproceedings{
tongshunzhang2024dmfourllie,
title={{DMF}our{LLIE}: Dual-Stage and Multi-Branch Fourier Network for Low-Light Image Enhancement},
author={Tongshun Zhang and Pingping Liu and Ming Zhao and Haotian Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YRr7CICk5a}
} | In the Fourier frequency domain, luminance information is primarily encoded in the amplitude component, while spatial structure information is significantly contained within the phase component. Existing low-light image enhancement techniques using Fourier transform have mainly focused on amplifying the amplitude component and simply replicating the phase component, an approach that often leads to color distortions and noise issues. In this paper, we propose a Dual-Stage Multi-Branch Fourier Low-Light Image Enhancement (DMFourLLIE) framework to address these limitations by emphasizing the phase component's role in preserving image structure and detail. The first stage integrates structural information from infrared images to enhance the phase component and employs a luminance-attention mechanism in the luminance-chrominance color space to precisely control amplitude enhancement. The second stage combines multi-scale and Fourier convolutional branches for robust image reconstruction, effectively recovering spatial structures and textures. This dual-branch joint optimization process ensures that complex image information is retained, overcoming the limitations of previous methods that neglected the interplay between amplitude and phase. Extensive experiments across multiple datasets demonstrate that DMFourLLIE outperforms current state-of-the-art methods in low-light image enhancement. | DMFourLLIE: Dual-Stage and Multi-Branch Fourier Network for Low-Light Image Enhancement | [
"Tongshun Zhang",
"Pingping Liu",
"Ming Zhao",
"Haotian Lv"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YHOcexl2fT | @inproceedings{
zhang2024mddrmultimodal,
title={{MDDR}:Multi-modal Dual-Attention aggregation for Depression Recognition},
author={Wei Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YHOcexl2fT}
} | Automated diagnosis of depression is crucial for early detection and timely intervention. Previous research has largely concentrated on visual indicators, often neglecting the value of leveraging a variety of data types. Although some studies have attempted to employ multiple modalities, they typically fall short in investigating the complex dynamics between features from various modalities over time. To address this challenge, we present an innovative Multi-modal Dual-Attention Aggregation Architecture for Depression Recognition (MDDR). This framework capitalizes on multi-modal pre-trained features and introduces two attention aggregation mechanisms: the Feature Alignment and Aggregation (FAA) module and the Sequence Encoding and Aggregation (SEA) module. The FAA module is designed to dynamically evaluate the relevance of multi-modal features for each instance, facilitating a dynamic integration of these features over time. Following this, the SEA module determines the importance of the amalgamated features for each frame, ensuring that aggregation is conducted based on their significance, to extract the most relevant features for accurately diagnosing depression. Moreover, we propose a unique loss calculation method specifically designed for depression assessment, named DRLoss. Our approach, evaluated on the AVEC2013 and AVEC2014 depression audiovisual datasets, achieves unparalleled performance. | MDDR:Multi-modal Dual-Attention aggregation for Depression Recognition | [
"Wei Zhang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YFOjMPQkJc | @inproceedings{
xie2024adaptive,
title={Adaptive Pruning of Channel Spatial Dependability in Convolutional Neural Networks},
author={Weiying Xie and Mei Yuan and Jitao Ma and Yunsong Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YFOjMPQkJc}
} | Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in various multimedia application scenarios. However, complex models often require significant computational resources and energy costs. Therefore, CNN compression is crucial for addressing deployment challenges of multimedia applications on resource-constrained edge devices. However, existing CNN channel pruning strategies primarily focus on the "weights" or "activations" of the model, overlooking its "interpretability" information. In this paper, we explore CNN pruning strategies from the perspective of model interpretability. We model the correspondence between channel feature maps and interpretable visual perception based on class saliency maps, aiming to assess the contribution of each channel to the desired output. Additionally, we utilize Discrete Wavelet Transform (DWT) to capture the global features and structure of class saliency maps. Based on this, we propose a Channel Spatial Dependability (CSD) metric, evaluating the importance and contribution of channels in a bidirectional manner to guide model quantization pruning. We also dynamically adjust the pruning rate of each layer based on performance changes, in order to achieve more accurate and efficient adaptive pruning. Experimental results demonstrate that our method achieves significant results across a range of different networks and datasets. For instance, we achieved a 51.3% pruning on the ResNet-56 model while maintaining an accuracy of 94.16%, outperforming feature-map or weight-based pruning and other state-of-the-art (SOTA) methods. | Adaptive Pruning of Channel Spatial Dependability in Convolutional Neural Networks | [
"Weiying Xie",
"Mei Yuan",
"Jitao Ma",
"Yunsong Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=YEdV3dCi2J | @inproceedings{
fang2024sammil,
title={{SAM}-{MIL}: A Spatial Contextual Aware Multiple Instance Learning Approach for Whole Slide Image Classification},
author={Heng Fang and Sheng Huang and Wenhao Tang and Luwen Huangfu and Bo Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YEdV3dCi2J}
} | Multiple Instance Learning (MIL) represents the predominant framework in Whole Slide Image (WSI) classification, covering aspects such as sub-typing, diagnosis, and beyond. Current MIL models predominantly rely on instance-level features derived from pretrained models such as ResNet. These models segment each WSI into independent patches and extract features from these local patches, leading to a significant loss of global spatial context and restricting the model's focus to merely local features. To address this issue, we propose a novel MIL framework, named SAM-MIL, that emphasizes spatial contextual awareness and explicitly incorporates spatial context by extracting comprehensive, image-level information. The Segment Anything Model (SAM) represents a pioneering visual segmentation foundational model that can capture segmentation features without the need for additional fine-tuning, rendering it an outstanding tool for extracting spatial context directly from raw WSIs. Our approach includes the design of group feature extraction based on spatial context and a SAM-Guided Group Masking strategy to mitigate class imbalance issues. We implement a dynamic mask ratio for different segmentation categories and supplement these with representative group features of categories. Moreover, SAM-MIL divides instances to generate additional pseudo-bags, thereby augmenting the training set, and introduces consistency of spatial context across pseudo-bags to further enhance the model's performance. Experimental results on the CAMELYON-16 and TCGA lung cancer datasets demonstrate that our proposed SAM-MIL model outperforms existing mainstream methods in WSIs classification. | SAM-MIL: A Spatial Contextual Aware Multiple Instance Learning Approach for Whole Slide Image Classification | [
"Heng Fang",
"Sheng Huang",
"Wenhao Tang",
"Luwen Huangfu",
"Bo Liu"
] | Conference | poster | 2407.17689 | [
"https://github.com/fangheng/sam-mil"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YE7G4Soi7k | @inproceedings{
zhou2024deciphering,
title={Deciphering Perceptual Quality in Colored Point Cloud: Prioritizing Geometry or Texture Distortion?},
author={Xuemei Zhou and Irene Viola and Yunlu Chen and Jiahuan Pei and Pablo Cesar},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=YE7G4Soi7k}
} | Point cloud contents represent one of the prevalent formats for 3D representations. Distortions introduced at various stages in the point cloud processing pipeline affect the visual quality, altering their geometric composition, texture information, or both. Understanding and quantifying the impact of the distortion domain on visual quality is vital to driving rate optimization and guiding postprocessing steps to improve the overall quality of experience. In this paper, we propose a multi-task guided multi-modality no reference metric for measuring the quality of colored point clouds (M3-Unity), which utilizes 4 types of modalities across different attributes and dimensionalities to represent point clouds. An attention mechanism establishes inter/intra associations among 3D/2D patches, which can complement each other, yielding both local and global features, to fit the highly nonlinear property of the human vision system. A multi-task decoder involving distortion type classification selects the best combination among 4 modalities based on the specific distortion type, aiding the regression task and enabling the in-depth analysis of the interplay between geometrical and textural distortions. Furthermore, our framework design and attention strategy enable us to measure the impact of individual attributes and their combinations, providing insights into how these associations contribute particularly in relation to distortion type. Experimental results demonstrate that our method effectively predicts the visual quality of point clouds, achieving state-of-the-art performance on four benchmark datasets. The code will be released. | Deciphering Perceptual Quality in Colored Point Cloud: Prioritizing Geometry or Texture Distortion? | [
"Xuemei Zhou",
"Irene Viola",
"Yunlu Chen",
"Jiahuan Pei",
"Pablo Cesar"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Y28oc8ctPb | @inproceedings{
shen2024hmradapter,
title={{HMR}-Adapter: A Lightweight Adapter with Dual-Path Cross Augmentation for Expressive Human Mesh Recovery},
author={Wenhao Shen and Wanqi Yin and Hao Wang and Chen Wei and Zhongang Cai and Lei Yang and Guosheng Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Y28oc8ctPb}
} | Expressive Human Mesh Recovery (HMR) involves reconstructing the 3D human body, including hands and face, from RGB images. It is difficult because humans are highly deformable, and hands are small and frequently occluded. Recent approaches have attempted to mitigate these issues using large datasets and models, but these solutions remain imperfect. Specifically, whole-body estimation models often inaccurately estimate hand poses, while hand expert models struggle with severe occlusions. To overcome these limitations, we introduce a dual-path cross augmentation framework with a novel adaptation approach called HMR-Adapter that enhances the decoding module of large HMR models. HMR-Adapter significantly improves expressive HMR performance by injecting additional guidance from other body parts. This approach refines hand pose predictions by incorporating body pose information and uses additional hand features to enhance body pose estimation in whole-body models. Remarkably, a HMR-Adapter with only about 27M parameters achieves better performance in fine-tuning the large model on a target dataset. Furthermore, HMR-Adapter significantly improves expressive HMR results by combining the adapted large whole-body and hand expert models. We show extensive experiments and analysis to demonstrate the efficacy of our method. | HMR-Adapter: A Lightweight Adapter with Dual-Path Cross Augmentation for Expressive Human Mesh Recovery | [
"Wenhao Shen",
"Wanqi Yin",
"Hao Wang",
"Chen Wei",
"Zhongang Cai",
"Lei Yang",
"Guosheng Lin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Xvq1KzaSXs | @inproceedings{
rossetto2024estimating,
title={Estimating the Semantic Density of Visual Media},
author={Luca Rossetto and Cristina Sarasua and Abraham Bernstein},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Xvq1KzaSXs}
} | Image descriptions provide precious information for a myriad of visual media management tasks ranging from image classification to image search. The value of such curated collections comes from their diverse content and their accompanying extensive annotations. Such annotations are typically supplied by communities, where users (often volunteers) curate labels and/or descriptions of images. Supporting users in their quest to increase (overall) description completeness where possible is, therefore, of utmost importance. In this paper, we introduce the notion of visual semantic density, which we define as the amount of information necessary to describe an image comprehensively such that the image content can be accurately inferred from the description. Together with the already existing annotations, this measure can estimate the annotation completeness, helping to identify collection content with missing annotations. We conduct user experiments to understand how humans perceive visual semantic density in different image collections to identify suitable proxy measures for our notion of visual semantic density. We find that extensive image captions can serve as a proxy to calculate an image's semantic density. Furthermore, we implement a visual semantic density estimator capable of approximating the human perception of the measure. We evaluate the performance of this estimator on several image datasets, concluding that it is feasible to sort images automatically by their visual semantic density, thereby allowing for the efficient scheduling of annotation tasks. Consequently, we believe that the visual semantic density estimation process can be used as a completeness measure to give feedback to annotating users in diverse visual content ecosystems, such as Wikimedia Commons. | Estimating the Semantic Density of Visual Media | [
"Luca Rossetto",
"Cristina Sarasua",
"Abraham Bernstein"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XuEviauUhY | @inproceedings{
sirejiding2024taskinteractionfree,
title={Task-Interaction-Free Multi-Task Learning with Efficient Hierarchical Feature Representation},
author={Shalayiding Sirejiding and Bayram Bayramli and Yuxiang Lu and Yuwen Yang and Tamam Alsarhan and Hongtao Lu and Yue Ding},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XuEviauUhY}
} | Traditional multi-task learning often relies on explicit task interaction mechanisms to enhance multi-task performance. However, these approaches encounter challenges such as negative transfer when jointly learning multiple weakly correlated tasks. Additionally, these methods handle encoded features at a large scale, which escalates computational complexity to ensure dense prediction task performance. In this study, we introduce a Task-Interaction-Free Network (TIF) for multi-task learning, which diverges from explicitly designed task interaction mechanisms. Firstly, we present a Scale Attentive-Feature Fusion Module (SAFF) to enhance each scale in the shared encoder to have rich task-agnostic encoded features. Subsequently, our proposed task and scale-specific decoders efficiently decode the enhanced features shared across tasks without necessitating task-interaction modules. Concretely, we utilize a Self-Feature Distillation Module (SFD) to explore task-specific features at lower scales and the Low-To-High Scale Feature Diffusion Module (LTHD) to diffuse global pixel relationships from low-level to high-level scales. Experiments on publicly available multi-task learning datasets validate that our TIF attains state-of-the-art performance. | Task-Interaction-Free Multi-Task Learning with Efficient Hierarchical Feature Representation | [
"Shalayiding Sirejiding",
"Bayram Bayramli",
"Yuxiang Lu",
"Yuwen Yang",
"Tamam Alsarhan",
"Hongtao Lu",
"Yue Ding"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XtnTav9T4g | @inproceedings{
zhang2024hypertime,
title={HyperTime: Hyperparameter Optimization for Combating Temporal Distribution Shifts},
author={Shaokun Zhang and Yiran Wu and Zhonghua Zheng and Qingyun Wu and Chi Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XtnTav9T4g}
} | In this work, we propose a hyperparameter optimization method named HyperTime to find hyperparameters robust to potential temporal distribution shifts in the unseen test data. Our work is motivated by an important observation that it is, in many cases, possible to achieve temporally robust predictive performance via hyperparameter optimization. Based on this observation, we leverage the ‘worst-case-oriented’ philosophy from the robust optimization literature to help find such robust hyperparameter configurations. HyperTime imposes a lexicographic priority order on average validation loss and worst-case validation loss over chronological validation sets. We perform a theoretical analysis on the upper bound of the expected test loss, which reveals the unique advantages of our approach. We also demonstrate the strong empirical performance of the proposed method on multiple machine learning tasks with temporal distribution shifts. | HyperTime: Hyperparameter Optimization for Combating Temporal Distribution Shifts | [
"Shaokun Zhang",
"Yiran Wu",
"Zhonghua Zheng",
"Qingyun Wu",
"Chi Wang"
] | Conference | poster | 2305.18421 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Xsg2Abvixi | @inproceedings{
zhang2024meshcentric,
title={Mesh-Centric Gaussian Splatting for Human Avatar Modelling with Real-time Dynamic Mesh Reconstruction},
author={Ruiqi Zhang and Jie Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Xsg2Abvixi}
} | Real-time mesh reconstruction is highly demanded for integrating human avatar in modern computer graphics applications. Current methods typically use coordinate-based MLP to represent 3D scene as Signed Distance Field (SDF) and optimize it through volumetric rendering, relying on Marching Cubes for mesh extraction. However, volumetric rendering lacks training and rendering efficiency, and the dependence on Marching Cubes significantly impacts mesh extraction efficiency. This study introduces a novel approach, Mesh-Centric Gaussian Splatting (MCGS), which introduces a unique representation Mesh-Centric SDF and optimizes it using high-efficiency Gaussian Splatting. The primary innovation introduces Mesh-Centric SDF, a thin layer of SDF enveloping the underlying mesh, and could be efficiently derived from mesh. This derivation of SDF from mesh allows for mesh optimization through SDF, providing mesh as 0 iso-surface, and eliminating the need for slow Marching Cubes. The secondary innovation focuses on optimizing Mesh-Centric SDF with high-efficiency Gaussian Splatting. By dispersing the underlying mesh of Mesh-Centric SDF into multiple layers and generating Mesh-Constrained Gaussians on them, we create Multi-Layer Gaussians. These Mesh-Constrained Gaussians confine Gaussians within a 2D surface space defined by mesh, ensuring an accurate correspondence between Gaussian rendering and mesh geometry. The Multi-Layer Gaussians serve as sampling layers of Mesh-Centric SDF and can be optimized with Gaussian Splatting, which would further optimize Mesh-Centric SDF and its underlying mesh. As a result, our method can directly optimize the underlying mesh through Gaussian Splatting, providing fast training and rendering speeds derived from Gaussian Splatting, as well as precise surface learning of SDF. Experiments demonstrate that our method achieves dynamic mesh reconstruction at over 30 FPS. In contrast, SDF-based methods using Marching Cubes achieve less than 1 FPS, and concurrent 3D Gaussian Splatting-based methods cannot extract reasonable mesh. | Mesh-Centric Gaussian Splatting for Human Avatar Modelling with Real-time Dynamic Mesh Reconstruction | [
"Ruiqi Zhang",
"Jie Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XnfldHcbeB | @inproceedings{
chu2024rayformer,
title={RayFormer: Improving Query-Based Multi-Camera 3D Object Detection via Ray-Centric Strategies},
author={Xiaomeng Chu and Jiajun Deng and Guoliang You and Yifan Duan and Yao Li and Yanyong Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XnfldHcbeB}
} | The recent advances in query-based multi-camera 3D object detection are featured by initializing object queries in the 3D space, and then sampling features from perspective-view images to perform multi-round query refinement. In such a framework, query points near the same camera ray are likely to sample similar features from very close pixels, resulting in ambiguous query features and degraded detection accuracy. To this end, we introduce RayFormer, a camera-ray-inspired query-based 3D object detector that aligns the initialization and feature extraction of object queries with the optical characteristics of cameras. Specifically, RayFormer transforms perspective-view image features into bird’s eye view (BEV) via the lift-splat-shoot method and segments the BEV map to sectors based on the camera rays. Object queries are uniformly and sparsely initialized along each camera ray, facilitating the projection of different queries onto different areas in the image to extract distinct features. Besides, we leverage the instance information of images to supplement the uniformly initialized object queries by further involving additional queries along the ray from 2D object detection boxes. To extract unique object-level features that cater to distinct queries, we design a ray sampling method that suitably organizes the distribution of feature sampling points on both images and bird’s eye view. Extensive experiments are conducted on the nuScenes dataset to validate our proposed ray-inspired model design. The proposed RayFormer achieves 55.5% mAP and 63.4% NDS, respectively. | RayFormer: Improving Query-Based Multi-Camera 3D Object Detection via Ray-Centric Strategies | [
"Xiaomeng Chu",
"Jiajun Deng",
"Guoliang You",
"Yifan Duan",
"Yao Li",
"Yanyong Zhang"
] | Conference | poster | 2407.14923 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Xmapdbf1CT | @inproceedings{
xiao2024eggesture,
title={{EGG}esture: Entropy-Guided Vector Quantized Variational AutoEncoder for Co-Speech Gesture Generation},
author={yiyong xiao and Kai Shu and Haoyi Zhang and BaoHuaYin and Wai Seng Cheang and Haoyang Wang and Jiechao Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Xmapdbf1CT}
} | Co-Speech gesture generation encounters challenges with imbalanced, long-tailed gesture distributions. Recent methods typically address this by employing a Vector Quantized Variational Autoencoder (VQ-VAE) to encode gestures into a codebook and classify codebook indices based on audio or text cues. However, due to the imbalance, the codebook classification tends to be biased towards majority gestures, neglecting semantically rich minority gestures. To address this, this paper proposes the Entropy-Guided Co-Speech Gesture Generation (EGGesture). EGGesture leverages an Entropy-Guided VQ-VAE to jointly optimize the distribution of codebook indices and adjust loss weights for codebook index classification, which consists of a) a differentiable approach for entropy computation using Gumbel-Softmax and cosine similarity, facilitating online codebook distribution optimization, and b) a strategy that utilizes computed codebook entropy to collaboratively guide the classification loss weighting. These designs enable the dynamic refinement of the codebook utilization, striking a balance between the quality of the learned gesture representation and the accuracy of the classification phase. Experiments on the Trinity and BEAT datasets demonstrate EGGesture’s state-of-the-art performance both qualitatively and quantitatively. The code and video are available. | EGGesture: Entropy-Guided Vector Quantized Variational AutoEncoder for Co-Speech Gesture Generation | [
"yiyong xiao",
"Kai Shu",
"Haoyi Zhang",
"BaoHuaYin",
"Wai Seng Cheang",
"Haoyang Wang",
"Jiechao Gao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XkQleH84zK | @inproceedings{
bin2024leveraging,
title={Leveraging Weak Cross-Modal Guidance for Coherence Modelling via Iterative Learning},
author={Yi Bin and Junrong Liao and Yujuan Ding and Haoxuan Li and Yang Yang and See-Kiong Ng and Heng Tao Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XkQleH84zK}
} | Cross-modal coherence modeling is essential for intelligent systems to help them organize and structure information, thereby understanding and creating content of the physical world coherently like human beings. Previous work on cross-modal coherence modeling attempted to leverage the order information from another modality to assist the coherence recovery of the target modality. Despite the effectiveness, labeled associated coherency information is not always available and might be costly to acquire, making the cross-modal guidance hard to leverage. To tackle this challenge, this paper explores a new way to take advantage of cross-modal guidance without gold labels on coherency, and proposes the Weak Cross-Modal Guided Ordering (WeGO) model. More specifically, it leverages high-confidence predicted pairwise order in one modality as reference information to guide the coherence modeling in another. An iterative learning paradigm is further designed to jointly optimize the coherence modeling in two modalities with selected guidance from each other. The iterative cross-modal boosting also functions in inference to further enhance coherence prediction in each modality. Experimental results on two public datasets have demonstrated that the proposed method outperforms existing methods for cross-modal coherence modeling tasks. Major technical modules have been validated as effective through ablation studies. Code is available at: https://github.com/scvready123/IterWeGO. | Leveraging Weak Cross-Modal Guidance for Coherence Modelling via Iterative Learning | [
"Yi Bin",
"Junrong Liao",
"Yujuan Ding",
"Haoxuan Li",
"Yang Yang",
"See-Kiong Ng",
"Heng Tao Shen"
] | Conference | poster | 2408.00305 | [
"https://github.com/scvready123/iterwego"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XiHC7aSQ56 | @inproceedings{
jia2024generating,
title={Generating Action-conditioned Prompts for Open-vocabulary Video Action Recognition},
author={Chengyou Jia and Minnan Luo and Xiaojun Chang and Zhuohang Dang and Mingfei Han and Mengmeng Wang and Guang Dai and Sizhe Dang and Jingdong Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XiHC7aSQ56}
} | Exploring open-vocabulary video action recognition is a promising venture, which aims to recognize previously unseen actions within any arbitrary set of categories.
Existing methods typically adapt pretrained image-text models to the video domain, capitalizing on their inherent strengths in generalization. A common thread among such methods is the augmentation of visual embeddings with temporal information to improve the recognition of seen actions. Yet, they settle for standard, less-informative action descriptions, thus faltering when confronted with novel actions. Drawing inspiration from human cognitive processes, we argue that augmenting text embeddings with human prior knowledge is pivotal for open-vocabulary video action recognition. To realize this, we innovatively blend video models with Large Language Models (LLMs) to devise Action-conditioned Prompts. Specifically, we propose the Action-Centric generation strategy to produce a set of descriptive sentences that contain distinctive features for identifying given actions. Building upon this foundation, we further introduce a multi-modal action knowledge alignment mechanism to align concepts in video and textual knowledge encapsulated within the prompts. Extensive experiments on various video benchmarks, including zero-shot, few-shot, and base-to-novel generalization settings, demonstrate that our method not only achieves new SOTA performance but also possesses excellent interpretability. | Generating Action-conditioned Prompts for Open-vocabulary Video Action Recognition | [
"Chengyou Jia",
"Minnan Luo",
"Xiaojun Chang",
"Zhuohang Dang",
"Mingfei Han",
"Mengmeng Wang",
"Guang Dai",
"Sizhe Dang",
"Jingdong Wang"
] | Conference | poster | 2312.02226 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Xf02AI2uxk | @inproceedings{
zhang2024visuallinguistic,
title={Visual-linguistic Cross-domain Feature Learning with Group Attention and Gamma-correct Gated Fusion for Extracting Commonsense Knowledge},
author={Jialu ZHANG and Xinyi Wang and Chenglin Yao and Jianfeng Ren and Xudong Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Xf02AI2uxk}
} | Acquiring commonsense knowledge about entity-pairs from images is crucial across diverse applications. Distantly supervised learning has made significant advancements by automatically retrieving images containing entity pairs and summarizing commonsense knowledge from the bag of images. However, the retrieved images may not always cover all possible relations, and the informative features across the bag of images are often overlooked. To address these challenges, a Multi-modal Cross-domain Feature Learning framework is proposed to incorporate the general domain knowledge from a large vision-text foundation model, ViT-GPT2, to handle unseen relations and exploit complementary information from multiple sources. Then, a Group Attention module is designed to exploit the attentive information from other instances of the same bag to boost the informative features of individual instances. Finally, a Gamma-corrected Gated Fusion is designed to select a subset of informative instances for a comprehensive summarization of commonsense entity relations. Extensive experimental results demonstrate the superiority of the proposed method over state-of-the-art models for extracting commonsense knowledge. | Visual-linguistic Cross-domain Feature Learning with Group Attention and Gamma-correct Gated Fusion for Extracting Commonsense Knowledge | [
"Jialu ZHANG",
"Xinyi Wang",
"Chenglin Yao",
"Jianfeng Ren",
"Xudong Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XcXR6GwgLW | @inproceedings{
wu2024frequency,
title={Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer},
author={Wenhan Wu and Ce Zheng and Zihao Yang and Chen Chen and Srijan Das and Aidong Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XcXR6GwgLW}
} | Recently, transformers have demonstrated great potential for modeling long-term dependencies from skeleton sequences and thereby gained ever-increasing attention in skeleton action recognition. However, the existing transformer-based approaches heavily rely on the naive attention mechanism for capturing the spatiotemporal features, which falls short in learning discriminative representations for actions that exhibit similar motion patterns. To address this challenge, we introduce the Frequency-aware Mixed Transformer (FreqMixFormer), specifically designed for recognizing similar skeletal actions with subtle discriminative motions. First, we introduce a frequency-aware attention module to unweave skeleton frequency representations by embedding joint features into frequency attention maps, aiming to distinguish the discriminative movements based on their frequency coefficients. Subsequently, we develop a mixed transformer architecture to incorporate spatial features with frequency features to model the comprehensive frequency-spatial patterns. Additionally, a temporal transformer is proposed to extract the global correlations across frames. Extensive experiments show that FreqMixFormer outperforms SOTA methods on 3 popular skeleton action recognition datasets, including the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets. Codes will be publicly available. | Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer | [
"Wenhan Wu",
"Ce Zheng",
"Zihao Yang",
"Chen Chen",
"Srijan Das",
"Aidong Lu"
] | Conference | poster | 2407.12322 | [
"https://github.com/wenhanwu95/freqmixformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XZi5N7eLV0 | @inproceedings{
zhuang2024towards,
title={Towards Multimodal-augmented Pre-trained Language Models via Self-balanced Expectation-Maximization Iteration},
author={Xianwei Zhuang and Xuxin Cheng and Zhihong Zhu and Zhanpeng Chen and Hongxiang Li and Yuexian Zou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XZi5N7eLV0}
} | Pre-trained language models (PLMs) that rely solely on textual corpora may present limitations in multimodal semantics comprehension. Existing studies attempt to alleviate this issue by incorporating additional modal information through image retrieval or generation. However, these methods: (1) inevitably encounter modality gaps and noise; (2) treat all modalities indiscriminately; and (3) ignore visual or acoustic semantics of key entities. To tackle these challenges, we propose a novel principled iterative framework for multimodal-augmented PLMs termed MASE, which achieves efficient and balanced injection of multimodal semantics under the proposed Expectation-Maximization (EM) based iterative algorithm. Initially, MASE utilizes multimodal proxies instead of explicit data to enhance PLMs, which avoids noise and modality gaps. In the E-step, MASE adopts a novel information-driven self-balanced strategy to estimate allocation weights. Furthermore, MASE employs heterogeneous graph attention to capture entity-level fine-grained semantics on the proposed multimodal-semantic scene graph. In the M-step, MASE injects global multimodal knowledge into PLMs through a cross-modal contrastive loss. Experimental results show that MASE consistently outperforms competitive baselines on multiple tasks across various architectures. More impressively, MASE is compatible with existing efficient parameter fine-tuning methods, such as prompt learning. | Towards Multimodal-augmented Pre-trained Language Models via Self-balanced Expectation-Maximization Iteration | [
"Xianwei Zhuang",
"Xuxin Cheng",
"Zhihong Zhu",
"Zhanpeng Chen",
"Hongxiang Li",
"Yuexian Zou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XZf79dpFB8 | @inproceedings{
liao2024when,
title={When, Where, and What? A Benchmark for Accident Anticipation and Localization with Large Language Models},
author={Haicheng Liao and Yongkang Li and Zhenning Li and Chengyue Wang and Yanchen Guan and KaHou Tam and Chunlin Tian and Li Li and Cheng-zhong Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XZf79dpFB8}
} | As autonomous driving systems increasingly become part of daily transportation, the ability to accurately anticipate and mitigate potential traffic accidents is paramount. Traditional accident anticipation models primarily utilizing dashcam videos are adept at predicting when an accident may occur but fall short in localizing the incident and identifying involved entities. Addressing this gap, this study introduces a novel framework that integrates Large Language Models (LLMs) to enhance predictive capabilities across multiple dimensions—what, when, and where accidents might occur. We develop an innovative chain-based attention mechanism that dynamically adjusts to prioritize high-risk elements within complex driving scenes. This mechanism is complemented by a three-stage model that processes outputs from smaller models into detailed multimodal inputs for LLMs, thus enabling a more nuanced understanding of traffic dynamics. Empirical validation on the DAD, CCD, and A3D datasets demonstrates superior performance in Average Precision (AP) and Mean Time-To-Accident (mTTA), establishing new benchmarks for accident prediction technology. Our approach not only advances the technological framework for autonomous driving safety but also enhances human-AI interaction, making the predictive insights generated by autonomous systems more intuitive and actionable. | When, Where, and What? A Benchmark for Accident Anticipation and Localization with Large Language Models | [
"Haicheng Liao",
"Yongkang Li",
"Zhenning Li",
"Chengyue Wang",
"Yanchen Guan",
"KaHou Tam",
"Chunlin Tian",
"Li Li",
"Cheng-zhong Xu"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XY8iqCpOBF | @inproceedings{
sun2024audiodriven,
title={Audio-Driven Identity Manipulation for Face Inpainting},
author={Yuqi Sun and Qing Lin and Weimin Tan and Bo Yan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XY8iqCpOBF}
} | Recent advances in multimodal artificial intelligence have greatly improved the integration of vision-language-audio cues to enrich the content creation process. Inspired by these developments, in this paper, we first integrate audio into the face inpainting task to facilitate identity manipulation. Our main insight is that a person's voice carries distinct identity markers, such as age and gender, which provide an essential supplement for identity-aware face inpainting. By extracting identity information from audio as guidance, our method can naturally support tasks of identity preservation and identity swapping in face inpainting. Specifically, we introduce a dual-stream network architecture comprising a face branch and an audio branch. The face branch is tasked with extracting deterministic information from the visible parts of the input masked face, while the audio branch is designed to capture heuristic identity priors from the speaker's voice.
The identity codes from the two streams are integrated using a multi-layer perceptron (MLP) to create a virtual unified identity embedding that represents comprehensive identity features. In addition, to explicitly exploit the information from audio, we introduce an audio-face generator to generate a `fake' audio face directly from audio and fuse the multi-scale intermediate features from the audio-face generator into the face inpainting network through an audio-visual feature fusion (AVFF) module. Extensive experiments demonstrate the positive impact of extracting identity information from audio on the face inpainting task, especially in identity preservation. | Audio-Driven Identity Manipulation for Face Inpainting | [
"Yuqi Sun",
"Qing Lin",
"Weimin Tan",
"Bo Yan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XXimrdFXdp | @inproceedings{
xiong2024realtime,
title={Real-time parameter evaluation of high-speed microfluidic droplets using continuous spike streams},
author={Bo Xiong and Changqing Su and Zihan Lin and Yanqin Chen and You Zhou and Zhen Cheng and Zhaofei Yu and Tiejun Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XXimrdFXdp}
} | Droplet-based microfluidic devices, with their high throughput and low power consumption, have found wide-ranging applications in the life sciences, such as drug discovery and cancer detection. However, the lack of real-time methods for accurately estimating droplet generation parameters has resulted in droplet microfluidic systems remaining largely offline-controlled, making it challenging to achieve efficient feedback in droplet generation. To meet the real-time requirements, it's imperative to minimize the data throughput of the collection system while employing parameter estimation algorithms that are both resource-efficient and highly effective. The spike camera, as an innovative form of neuromorphic camera, facilitates high temporal resolution scene capture with comparatively low data throughput. In this paper, we propose a real-time evaluation method for high-speed droplet parameters based on spike-based microfluidic flow-focusing, named RTDE, which integrates a spike camera into the droplet collection system to efficiently capture information using spike streams. To process the spike stream effectively, we develop a spike-based estimation algorithm for real-time droplet generation parameters. To validate the performance of our method, we collected spike-based droplet datasets (SDD), comprising synthetic and real data with varying flow velocities, frequencies, and droplet sizes. Experimental results on these datasets consistently demonstrate that our method achieves parameter estimations that closely match the ground truth values, showcasing high precision. Furthermore, comparative experiments with image-based parameter estimation methods highlight the superior time efficiency of our method, enabling real-time calculation of parameter estimations. | Real-time parameter evaluation of high-speed microfluidic droplets using continuous spike streams | [
"Bo Xiong",
"Changqing Su",
"Zihan Lin",
"Yanqin Chen",
"You Zhou",
"Zhen Cheng",
"Zhaofei Yu",
"Tiejun Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XRMvayh6Vk | @inproceedings{
zhu2024towards,
title={Towards High-resolution 3D Anomaly Detection via Group-Level Feature Contrastive Learning},
author={Hongze Zhu and Guoyang Xie and Chengbin Hou and Tao Dai and Can GAO and Jinbao Wang and Linlin Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XRMvayh6Vk}
} | High-resolution point clouds (HRPCD) anomaly detection (AD) plays a critical role in precision machining and high-end equipment manufacturing. Although considerable 3D-AD methods have been proposed recently, they still cannot meet the requirements of the HRPCD-AD task. There are several challenges: i) It is difficult to directly capture HRPCD information due to large amounts of points at the sample level; ii) The advanced transformer-based methods usually obtain anisotropic features, leading to degradation of the representation; iii) The proportion of abnormal areas is very small, which makes it difficult to characterize. To address these challenges, we propose a novel group-level feature-based network, called Group3AD, which has a highly efficient representation ability. First, we design an Intercluster Uniformity Network (IUN) to present the mapping of different groups in the feature space as several clusters, and obtain a more uniform distribution between clusters representing different parts of the point clouds in the feature space. Then, an Intracluster Alignment Network (IAN) is designed to encourage groups within the cluster to be distributed tightly in the feature space. In addition, we propose an Adaptive Group-Center Selection (AGCS) based on geometric information to improve the pixel density of potential anomalous regions during inference. The experimental results verify the effectiveness of our proposed Group3AD, which surpasses Reg3D-AD by a margin of 5% in terms of object-level AUROC on Real3D-AD. | Towards High-resolution 3D Anomaly Detection via Group-Level Feature Contrastive Learning | [
"Hongze Zhu",
"Guoyang Xie",
"Chengbin Hou",
"Tao Dai",
"Can GAO",
"Jinbao Wang",
"Linlin Shen"
] | Conference | poster | 2408.04604 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XOQANk2Irt | @inproceedings{
li2024grace,
title={{GRACE}: {GR}adient-based Active Learning with Curriculum Enhancement for Multimodal Sentiment Analysis},
author={Xinyu Li and Wenqing Ye and Yueyi Zhang and Xiaoyan Sun},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XOQANk2Irt}
} | Multimodal sentiment analysis (MSA) aims to predict sentiment from text, audio, and visual data of videos. Existing works focus on designing fusion strategies or decoupling mechanisms, which suffer from low data utilization and a heavy reliance on large amounts of labeled data. However, acquiring large-scale annotations for multimodal sentiment analysis is extremely labor-intensive and costly. To address this challenge, we propose GRACE, a GRadient-based Active learning method with Curriculum Enhancement, designed for MSA under a multi-task learning framework. Our approach achieves annotation reduction by strategically selecting valuable samples from the unlabeled data pool while maintaining high-performance levels. Specifically, we introduce informativeness and representativeness criteria, calculated from gradient magnitudes and sample distances, to quantify the active value of unlabeled samples. Additionally, an easiness criterion is incorporated to avoid outliers, considering the relationship between modality consistency and sample difficulty. During the learning process, we dynamically balance sample difficulty and active value, guided by the curriculum learning principle. This strategy prioritizes easier, modality-aligned samples for stable initial training, then gradually increases the difficulty by incorporating more challenging samples with modality conflicts. Extensive experiments demonstrate the effectiveness of our approach on both multimodal sentiment regression and classification benchmarks. | GRACE: GRadient-based Active Learning with Curriculum Enhancement for Multimodal Sentiment Analysis | [
"Xinyu Li",
"Wenqing Ye",
"Yueyi Zhang",
"Xiaoyan Sun"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XNSbkCsejd | @inproceedings{
wang2024nonoverlapped,
title={Non-Overlapped Multi-View Weak-Label Learning Guided by Multiple Correlations},
author={Kaixiang Wang and Xiaojian Ding and Fan Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XNSbkCsejd}
} | Insufficient labeled training samples pose a critical challenge in multi-label classification, potentially leading to overfitting of the model. This paper delineates a criterion for establishing a common domain among different datasets, whereby datasets sharing analogous object descriptions and label structures are considered part of the same field. Integrating samples from disparate datasets within this shared field for training purposes effectively mitigates overfitting and enhances model accuracy. Motivated by this approach, we introduce a novel method for multi-label classification termed Non-Overlapped Multi-View Weak-Label Learning Guided by Multiple Correlations (NOMWM). Our method strategically amalgamates samples from diverse datasets within the shared field to enrich the training dataset. Furthermore, we project samples from various datasets onto a unified subspace to facilitate learning in a consistent latent space. Additionally, we address the challenge of weak labels stemming from incomplete label overlaps across datasets. Leveraging weak-label indicator matrices and label correlation mining techniques, we effectively mitigate the impact of weak labels. Extensive experimentation on multiple benchmark datasets validates the efficacy of our method, demonstrating clear improvements over existing state-of-the-art approaches. | Non-Overlapped Multi-View Weak-Label Learning Guided by Multiple Correlations | [
"Kaixiang Wang",
"Xiaojian Ding",
"Fan Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XKs7DR9GAK | @inproceedings{
mei2024medical,
title={Medical Report Generation via Multimodal Spatio-Temporal Fusion},
author={Xin Mei and Rui Mao and Xiaoyan Cai and Libin Yang and Erik Cambria},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XKs7DR9GAK}
} | Medical report generation aims at automating the synthesis of accurate and comprehensive diagnostic reports from radiological images. The task can significantly enhance clinical decision-making and alleviate the workload on radiologists. Existing works normally generate reports from single chest radiographs, although historical examination data also serve as crucial references for radiologists in real-world clinical settings. To address this constraint, we introduce a novel framework that mimics the workflow of radiologists. This framework compares past and present patient images to monitor disease progression and incorporates prior diagnostic reports as references for generating current personalized reports. We tackle the textual diversity challenge in cross-modal tasks by promoting style-agnostic discrete report representation learning and token generation. Furthermore, we propose a novel spatio-temporal fusion method with multi-granularities to fuse textual and visual features by disentangling the differences between current and historical data. We also tackle token generation biases, which arise from long-tail frequency distributions, proposing a novel feature normalization technique. This technique ensures unbiased generation for tokens, whether they are frequent or infrequent, enabling the robustness of report generation for rare diseases. Experimental results on the two public datasets demonstrate that our proposed model outperforms state-of-the-art baselines. | Medical Report Generation via Multimodal Spatio-Temporal Fusion | [
"Xin Mei",
"Rui Mao",
"Xiaoyan Cai",
"Libin Yang",
"Erik Cambria"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XHaqiDP293 | @inproceedings{
ma2024textregion,
title={Text-Region Matching for Multi-Label Image Recognition with Missing Labels},
author={Leilei Ma and Hongxing Xie and Lei Wang and Yanping Fu and Dengdi Sun and Haifeng Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XHaqiDP293}
} | Recently, large-scale visual language pre-trained (VLP) models have demonstrated impressive performance across various downstream tasks. Motivated by these advancements, pioneering efforts have emerged in multi-label image recognition with missing labels, leveraging VLP prompt-tuning technology. However, they usually cannot match text and vision features well, due to complicated semantic gaps and missing labels in a multi-label image. To tackle this challenge, we propose Text-Region Matching for optimizing Multi-Label prompt tuning, namely TRM-ML, a novel method for enhancing meaningful cross-modal matching. Compared to existing methods, we advocate exploring the information of category-aware regions rather than the entire image or pixels, which contributes to bridging the semantic gap between textual and visual representations in a one-to-one matching manner. Concurrently, we further introduce multimodal contrastive learning to narrow the semantic gap between textual and visual modalities and establish intra-class and inter-class relationships. Additionally, to deal with missing labels, we propose a multimodal category prototype that leverages intra- and inter-category semantic relationships to estimate unknown labels, facilitating pseudo-label generation. Extensive experiments on the MS-COCO, PASCAL VOC, Visual Genome, NUS-WIDE, and CUB-200-2011 benchmark datasets demonstrate that our proposed framework outperforms the current state-of-the-art methods by a significant margin. Our code is available here. | Text-Region Matching for Multi-Label Image Recognition with Missing Labels | [
"Leilei Ma",
"Hongxing Xie",
"Lei Wang",
"Yanping Fu",
"Dengdi Sun",
"Haifeng Zhao"
] | Conference | poster | 2407.18520 | [
"https://github.com/yu-gi-oh-leilei/trm-ml"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XEwBY1K1ty | @inproceedings{
fan2024pointgcc,
title={Point-{GCC}: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast},
author={Guofan Fan and Zekun Qi and Wenkai Shi and Kaisheng Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=XEwBY1K1ty}
} | Geometry and color information provided by point clouds are both crucial for 3D scene understanding. The two kinds of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence we explore a 3D self-supervised paradigm that can better utilize the relations of point cloud information. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To accommodate actual application tasks, we design (i) hierarchical supervision with point-level contrast and reconstruction, and object-level contrast based on a novel deep clustering module to close the gap between pre-training and downstream tasks; (ii) an architecture-agnostic backbone to adapt to various downstream models. Benefiting from the object-level representation associated with downstream tasks, Point-GCC can directly evaluate model performance, and the results demonstrate the effectiveness of our method. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets. Codes will be released on GitHub. | Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast | [
"Guofan Fan",
"Zekun Qi",
"Wenkai Shi",
"Kaisheng Ma"
] | Conference | poster | 2305.19623 | [
"https://github.com/asterisci/point-gcc"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Wzqrle4BzA | @inproceedings{
wu2024diffusion,
title={Diffusion Posterior Proximal Sampling for Image Restoration},
author={Hongjie Wu and Linchao He and Mingqin Zhang and Dongdong Chen and Kunming Luo and Mengting Luo and Ji-Zhe Zhou and Hu Chen and Jiancheng Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Wzqrle4BzA}
} | Diffusion models have demonstrated remarkable efficacy in generating high-quality samples.
Existing diffusion-based image restoration algorithms exploit pre-trained diffusion models to leverage data priors, yet they still preserve elements inherited from the unconditional generation paradigm.
These strategies initiate the denoising process with pure white noise and incorporate random noise at each generative step, leading to over-smoothed results.
In this paper, we present a refined paradigm for diffusion-based image restoration. Specifically, we opt for a sample consistent with the measurement identity at each generative step, exploiting the sampling selection as an avenue for output stability and enhancement. The number of candidate samples used for selection is adaptively determined based on the signal-to-noise ratio of the timestep.
Additionally, we start the restoration process with an initialization combined with the measurement signal, providing supplementary information to better align the generative process.
Extensive experimental results and analyses validate that our proposed method significantly enhances image restoration performance while consuming negligible additional computational resources. | Diffusion Posterior Proximal Sampling for Image Restoration | [
"Hongjie Wu",
"Linchao He",
"Mingqin Zhang",
"Dongdong Chen",
"Kunming Luo",
"Mengting Luo",
"Ji-Zhe Zhou",
"Hu Chen",
"Jiancheng Lv"
] | Conference | oral | 2402.16907 | [
"https://github.com/74587887/dpps_code"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=WsNFULCsyj | @inproceedings{
zhang2024video,
title={Video Anomaly Detection via Progressive Learning of Multiple Proxy Tasks},
author={Menghao Zhang and Jingyu Wang and Qi Qi and Pengfei Ren and Haifeng Sun and Zirui Zhuang and Huazheng Wang and Lei Zhang and Jianxin Liao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WsNFULCsyj}
} | Learning multiple proxy tasks is a popular training strategy in semi-supervised video anomaly detection. However, the traditional method of learning multiple proxy tasks simultaneously is prone to suboptimal solutions, and simply executing multiple proxy tasks sequentially cannot ensure continuous performance improvement. In this paper, we thoroughly investigate the impact of task composition and training order on performance enhancement. We find that ensuring continuous performance improvement in multi-task learning requires different but continuous optimization objectives in different training phases. To this end, a training strategy based on progressive learning is proposed to enhance the efficacy of multi-task learning in VAD. The learning objectives of the model in previous phases contribute to the training in subsequent phases. Specifically, we decompose video anomaly detection into three phases: perception, comprehension, and inference, continuously refining the learning objectives to enhance model performance. In the three phases, we perform the visual task, the semantic task and the open-set task in turn to train the model. The model learns different levels of features and focuses on different types of anomalies in different phases. Additionally, we design a simple yet effective semantic task leveraging the semantic consistency of context. Extensive experiments demonstrate the effectiveness of our method, highlighting that the benefits derived from progressive learning transcend specific proxy tasks. | Video Anomaly Detection via Progressive Learning of Multiple Proxy Tasks | [
"Menghao Zhang",
"Jingyu Wang",
"Qi Qi",
"Pengfei Ren",
"Haifeng Sun",
"Zirui Zhuang",
"Huazheng Wang",
"Lei Zhang",
"Jianxin Liao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Wp4Hkaz2Xe | @inproceedings{
zhang2024not,
title={Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting},
author={Xingyu Zhang and Siyu Zhao and Zeen Song and Huijie Guo and Jianqi Zhang and Changwen Zheng and Wenwen Qiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Wp4Hkaz2Xe}
} | Long-term time series forecasting is a long-standing challenge in various applications. A central issue in time series forecasting is that methods should expressively capture long-term dependency. Furthermore, time series forecasting methods should be flexible when applied to different scenarios. Although Fourier analysis offers an alternative to effectively capture reusable and periodic patterns to achieve long-term forecasting in different scenarios, existing methods often assume high-frequency components represent noise and should be discarded in time series forecasting. However, we conduct a series of motivation experiments and discover that the role of certain frequencies varies depending on the scenario. In some scenarios, removing high-frequency components from the original time series can improve the forecasting performance, while in other scenarios, removing them is harmful to forecasting performance. Therefore, it is necessary to treat the frequencies differently according to specific scenarios. To achieve this, we first reformulate the time series forecasting problem as learning a transfer function of each frequency in the Fourier domain. Further, we design Frequency Dynamic Fusion (FreDF), which individually predicts each Fourier component, and dynamically fuses the output of different frequencies. Moreover, we provide a novel insight into the generalization ability of time series forecasting and propose the generalization bound of time series forecasting. Then we prove that FreDF has a lower bound, indicating that FreDF has better generalization ability. Experimental results and ablation studies demonstrate the effectiveness of FreDF. | Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting | [
"Xingyu Zhang",
"Siyu Zhao",
"Zeen Song",
"Huijie Guo",
"Jianqi Zhang",
"Changwen Zheng",
"Wenwen Qiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WjqfAUNDSJ | @inproceedings{
mao2024magedit,
title={{MAG}-Edit: Localized Image Editing in Complex Scenarios via $\underline{M}$ask-Based $\underline{A}$ttention-Adjusted $\underline{G}$uidance},
author={Qi Mao and Lan Chen and Yuchao Gu and Zhen Fang and Mike Zheng Shou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WjqfAUNDSJ}
} | Recent diffusion-based image editing approaches have exhibited impressive editing capabilities in images with one dominant object in simple compositions. However, localized editing in images containing multiple objects and intricate compositions has not been well-studied in the literature, despite its growing real-world demands. Existing mask-based inpainting methods fall short of retaining the underlying structure within the edit region, causing noticeable discordance with their complex surroundings. Meanwhile, attention-based methods such as Prompt-to-Prompt (P2P) often exhibit editing leakage and misalignment in more complex compositions. In this work, we propose MAG-Edit, a plug-and-play, inference-stage optimization method, that empowers attention-based editing approaches, such as P2P, to enhance localized image editing in intricate scenarios. In particular, MAG-Edit optimizes the noise latent feature by encouraging two mask-based cross-attention ratios of the edit token, which in turn gradually enhances the local alignment with the desired prompt. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method in achieving both text alignment and structure preservation for localized editing within complex scenarios. | MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance | [
"Qi Mao",
"Lan Chen",
"Yuchao Gu",
"Zhen Fang",
"Mike Zheng Shou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Whe0cYAui4 | @inproceedings{
chen2024controllable,
title={Controllable Music Loops Generation with {MIDI} and Text via Multi-Stage Cross Attention and Instrument-Aware Reinforcement Learning},
author={Guan-Yuan Chen and Von-Wun Soo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Whe0cYAui4}
} | The burgeoning field of text-to-music generation models has shown great promise in its ability to generate high-quality music aligned with users' textual descriptions. These models effectively capture abstract/global musical features such as style and mood. However, they often fail to precisely render critical music loop attributes, including melody, rhythms, and instrumentation, which are essential for modern music loop production. To overcome this limitation, this paper proposes a Loops Transformer and a Multi-Stage Cross Attention mechanism that enable a cohesive integration of textual and MIDI input specifications. Additionally, a novel Instrument-Aware Reinforcement Learning technique is introduced to ensure the correct adoption of instrumentation. We demonstrate that the proposed model can generate music loops that simultaneously satisfy the conditions specified by both natural language texts and MIDI input, ensuring coherence between the two modalities. We also show that our model outperforms the state-of-the-art baseline model, MusicGen, in both objective metrics (by lowering the FAD score by 1.3, indicating superior quality with lower scores, and by improving the Normalized Dynamic Time Warping Distance with given melodies by 12\%) and subjective metrics (by +2.56\% in OVL, +5.42\% in REL, and +7.74\% in Loop Consistency). These improvements highlight our model's capability to produce musically coherent loops that satisfy the complex requirements of contemporary music production, representing a notable advancement in the field. Generated music loop samples can be explored at: https://loopstransformer.netlify.app/ . | Controllable Music Loops Generation with MIDI and Text via Multi-Stage Cross Attention and Instrument-Aware Reinforcement Learning | [
"Guan-Yuan Chen",
"Von-Wun Soo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WhCEsBtJBG | @inproceedings{
chen2024cmt,
title={{CMT}: Co-training Mean-Teacher for Unsupervised Domain Adaptation on 3D Object Detection},
author={Shijie Chen and Junbao Zhuo and Xin Li and Haizhuang Liu and Rongquan Wang and Jiansheng Chen and Huimin Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WhCEsBtJBG}
} | LiDAR-based 3D detection, as an essential technique in multimedia applications such as augmented reality and autonomous driving, has made great progress in recent years. However, the performance of a well-trained 3D detector is considerably degraded when deployed in unseen environments due to the severe domain gap. Traditional unsupervised domain adaptation methods, including co-training and mean-teacher frameworks, do not effectively bridge the domain gap as they struggle with noisy and incomplete pseudo-labels and the inability to capture domain-invariant features. In this work, we introduce a novel Co-training Mean-Teacher (CMT) framework for unsupervised domain adaptation in 3D object detection. Our framework enhances adaptation by leveraging both source and target domain data to construct a hybrid domain that aligns domain-specific features more effectively. We employ hard instance mining to enrich the target domain feature distribution and utilize class-aware contrastive learning to refine feature representations across domains. Additionally, we develop batch adaptive normalization to fine-tune the batch normalization parameters of the teacher model dynamically, promoting more stable and reliable learning. Extensive experiments across various benchmarks, including Waymo, nuScenes and KITTI, demonstrate the superiority of our CMT over the state-of-the-art approaches in different adaptation scenarios. | CMT: Co-training Mean-Teacher for Unsupervised Domain Adaptation on 3D Object Detection | [
"Shijie Chen",
"Junbao Zhuo",
"Xin Li",
"Haizhuang Liu",
"Rongquan Wang",
"Jiansheng Chen",
"Huimin Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WczgiUZ4QB | @inproceedings{
pang2024dualresolution,
title={Dual-Resolution Fusion Modeling for Unsupervised Cross-Resolution Person Re-Identification},
author={Zhiqi Pang and Lingling Zhao and Chunyu Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WczgiUZ4QB}
} | Cross-resolution person re-identification (CR-ReID) aims to match images of the same person with different resolutions in different scenarios. Existing CR-ReID methods achieve promising performance by relying on large-scale manually annotated identity labels. However, acquiring manual labels requires considerable human effort, greatly limiting the flexibility of existing CR-ReID methods. To address this issue, we propose a dual-resolution fusion modeling (DRFM) framework to tackle the CR-ReID problem in an unsupervised manner. Firstly, we design a cross-resolution pseudo-label generation (CPG) method, which initially clusters high-resolution images and then obtains reliable identity pseudo-labels by fusing class vectors in both resolution spaces. Subsequently, we develop a cross-resolution feature fusion (CRFF) module to fuse features from both high-resolution and low-resolution spaces. The fusion features have the potential to serve as a new form of resolution-invariant features. Finally, we introduce cross-resolution contrastive loss and probability sharpening loss in DRFM to facilitate resolution-invariant learning and effectively utilize ambiguous samples for optimization. Experimental results on multiple CR-ReID datasets demonstrate that the proposed DRFM not only outperforms existing unsupervised methods but also approaches the performance of early supervised methods. | Dual-Resolution Fusion Modeling for Unsupervised Cross-Resolution Person Re-Identification | [
"Zhiqi Pang",
"Lingling Zhao",
"Chunyu Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WbbfjojmjD | @inproceedings{
pan2024ravss,
title={{RAVSS}: Robust Audio-Visual Speech Separation in Multi-Speaker Scenarios with Missing Visual Cues},
author={Tianrui Pan and Jie Liu and Bohan Wang and Jie Tang and Gangshan Wu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WbbfjojmjD}
} | While existing Audio-Visual Speech Separation (AVSS) methods primarily concentrate on the audio-visual fusion strategy for two-speaker separation, they demonstrate a severe performance drop in the multi-speaker separation scenarios. Typically, AVSS methods employ guiding videos to sequentially isolate individual speakers from the given audio mixture, resulting in notable missing and noisy parts across various segments of the separated speech. In this study, we propose a simultaneous multi-speaker separation framework that can facilitate the concurrent separation of multiple speakers within a singular process. We introduce speaker-wise interactions to establish distinctions and correlations among speakers. Experimental results on the VoxCeleb2 and LRS3 datasets demonstrate that our method achieves state-of-the-art performance in separating mixtures with 2, 3, 4, and 5 speakers, respectively. Additionally, our model can utilize speakers with complete audio-visual information to compensate for other speakers with deficient visual cues, thereby enhancing its resilience to missing visual cues. We also conduct experiments where visual information for specific speakers is entirely absent or visual frames are partially missing. The results demonstrate that our model consistently outperforms others, exhibiting the smallest performance drop across all settings involving 2, 3, 4, and 5 speakers. | RAVSS: Robust Audio-Visual Speech Separation in Multi-Speaker Scenarios with Missing Visual Cues | [
"Tianrui Pan",
"Jie Liu",
"Bohan Wang",
"Jie Tang",
"Gangshan Wu"
] | Conference | poster | 2407.19224 | [
"https://github.com/pantianrui/RAVSS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=WQr5kbHm8V | @inproceedings{
yin2024flexir,
title={Flex{IR}: Towards Flexible and Manipulable Image Restoration},
author={Zhengwei Yin and Guixu Lin and Mengshun Hu and Hao Zhang and Yinqiang Zheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WQr5kbHm8V}
} | The domain of image restoration encompasses a wide array of highly effective models (e.g., SwinIR, CODE, DnCNN), each exhibiting distinct advantages in either efficiency or performance. Selecting and deploying these models necessitates careful consideration of resource limitations. While some studies have explored dynamic restoration through the integration of an auxiliary network within a unified framework, these approaches often fall short in practical applications due to the complexities involved in training, retraining, and hyperparameter adjustment, as well as the limitations of being totally controlled by the auxiliary network and biased by the training data. To address these challenges, we introduce FlexIR: a flexible and manipulable framework for image restoration. FlexIR is distinguished by three components: a meticulously designed hierarchical branch network enabling dynamic output, an innovative progressive self-distillation process, and a channel-wise evaluation method to enhance knowledge distillation efficiency. Additionally, we propose two novel inference methodologies to fully leverage FlexIR, catering to diverse user needs and deployment contexts. Through this framework, FlexIR achieves unparalleled performance across all branches, allowing users to navigate the trade-offs between quality, cost, and efficiency during the inference phase. Crucially, FlexIR employs a dynamic mechanism powered by a non-learning metric independent of training data, ensuring that FlexIR is entirely under the direct control of the user. Comprehensive experimental evaluations validate FlexIR’s flexibility, manipulability, and cost-effectiveness, showcasing its potential for straightforward adjustments and quick adaptations across a range of scenarios. Codes will be available at [URL]. | FlexIR: Towards Flexible and Manipulable Image Restoration | [
"Zhengwei Yin",
"Guixu Lin",
"Mengshun Hu",
"Hao Zhang",
"Yinqiang Zheng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WOp8uEVcFt | @inproceedings{
li2024clickdiff,
title={ClickDiff: Click to Induce Semantic Contact Map for Controllable Grasp Generation with Diffusion Models},
author={Peiming Li and Ziyi Wang and Mengyuan Liu and Hong Liu and Chen Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WOp8uEVcFt}
} | Grasp generation aims to create complex hand-object interactions with a specified object. While traditional approaches for hand generation have primarily focused on visibility and diversity under scene constraints, they tend to overlook the fine-grained hand-object interactions such as contacts, resulting in inaccurate and undesired grasps. To address these challenges, we propose a controllable grasp generation task and introduce ClickDiff, a controllable conditional generation model that leverages a fine-grained Semantic Contact Map (SCM). Particularly when synthesizing interactive grasps, the method enables the precise control of grasp synthesis through either a user-specified or an algorithmically predicted Semantic Contact Map. Specifically, to optimally utilize contact supervision constraints and to accurately model the complex physical structure of hands, we propose a Dual Generation Framework. Within this framework, the Semantic Conditional Module generates reasonable contact maps based on fine-grained contact information, while the Contact Conditional Module utilizes contact maps alongside object point clouds to generate realistic grasps. We establish evaluation criteria applicable to controllable grasp generation. Both unimanual and bimanual generation experiments on GRAB and ARCTIC datasets verify the validity of our proposed method, demonstrating the efficacy and robustness of ClickDiff, even with previously unseen objects. Our code is available at https://anonymous.4open.science/r/ClickDiff. | ClickDiff: Click to Induce Semantic Contact Map for Controllable Grasp Generation with Diffusion Models | [
"Peiming Li",
"Ziyi Wang",
"Mengyuan Liu",
"Hong Liu",
"Chen Chen"
] | Conference | oral | 2407.19370 | [
"https://github.com/adventurer-w/clickdiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=WOWOgfVYR9 | @inproceedings{
alimohammadzadeh2024swarical,
title={Swarical: An Integrated Hierarchical Approach to Localizing Flying Light Specks},
author={Hamed Alimohammadzadeh and Shahram Ghandeharizadeh},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WOWOgfVYR9}
} | Swarical, a \underline{Swar}m-based hierarch\underline{ical} localization technique, enables miniature drones, known as Flying Light Specks (FLSs), to accurately and efficiently localize and illuminate complex 2D and 3D shapes. Its accuracy depends on the physical hardware (sensors) of FLSs, which are used to track neighboring FLSs in order to localize themselves. It uses the hardware specification to convert mesh files into point clouds that enable a swarm of FLSs to localize at the highest accuracy afforded by their hardware. Swarical considers a heterogeneous mix of FLSs with different orientations for their tracking sensors, ensuring a line of sight between a localizing FLS and its anchor FLS. We present an implementation using Raspberry cameras and ArUco markers. A comparison of Swarical with a state of the art decentralized localization technique shows that it is as accurate and more than 2x faster. | Swarical: An Integrated Hierarchical Approach to Localizing Flying Light Specks | [
"Hamed Alimohammadzadeh",
"Shahram Ghandeharizadeh"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WO65vvadrY | @inproceedings{
wang2024decoding,
title={Decoding Urban Industrial Complexity: Enhancing Knowledge-Driven Insights via IndustryScope{GPT}},
author={Siqi Wang and Chao Liang and Yunfan Gao and Liu Yang and Jing Li and Haofen Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WO65vvadrY}
} | Industrial parks are critical to urban economic growth, blending technology and urban life to foster innovation. Yet, their development often faces challenges due to imbalances between industrial needs and urban services, necessitating strategic planning and operation. This paper presents IndustryScopeKG, a pioneering multi-modal, multi-level large-scale industrial park knowledge graph, and the IndustryScopeGPT framework. By leveraging vast datasets, including corporate, socio-economic, and geospatial information, IndustryScopeKG captures the intricate relationships and semantics of industrial parks, facilitating comprehensive analysis and planning. The IndustryScopeGPT framework, integrating LLMs with Monte Carlo Tree Search, enhances decision-making capabilities, enabling dynamic and adaptable responses to the diverse needs of industrial park planning and operation (IPPO) tasks. Our contributions include the release of the first open-source industrial park knowledge graph, IndustryScopeKG, and the demonstration of the IndustryScopeGPT framework's efficacy in site selection and planning tasks through the IndustryScopeQA benchmark. Our findings highlight the potential of combining LLMs with extensive datasets and innovative frameworks, setting a new standard for research and practice in the field. | Decoding Urban Industrial Complexity: Enhancing Knowledge-Driven Insights via IndustryScopeGPT | [
"Siqi Wang",
"Chao Liang",
"Yunfan Gao",
"Liu Yang",
"Jing Li",
"Haofen Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WLfWFX7suE | @inproceedings{
fu2024semisupervised,
title={Semi-supervised Camouflaged Object Detection from Noisy Data},
author={Yuanbin Fu and Jie Ying and Houlei Lv and Xiaojie Guo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WLfWFX7suE}
} | Most previous camouflaged object detection methods lean heavily upon large-scale manually-labeled training samples, which are notoriously difficult to obtain. Even worse, the reliability of labels is compromised by the inherent challenges in accurately annotating concealed targets that exhibit high similarities with their surroundings. To overcome these shortcomings, this paper develops the first semi-supervised camouflaged object detection framework, which requires merely a small number of samples, even with noisy/incorrect annotations. Specifically, on the one hand, we introduce an innovative pixel-level loss re-weighting technique to reduce possible negative impacts from imperfect labels, through a window-based voting strategy. On the other hand, we take advantage of ensemble learning to explore robust features against noises/outliers, thereby generating relatively reliable pseudo labels for unlabelled images. Extensive experiments on four benchmark datasets have been conducted. | Semi-supervised Camouflaged Object Detection from Noisy Data | [
"Yuanbin Fu",
"Jie Ying",
"Houlei Lv",
"Xiaojie Guo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WLa8xsjUr1 | @inproceedings{
chen2024embodied,
title={Embodied Contrastive Learning with Geometric Consistency and Behavioral Awareness for Object Navigation},
author={Bolei Chen and Jiaxu Kang and Ping Zhong and Yixiong Liang and Yu Sheng and Jianxin Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WLa8xsjUr1}
} | Object Navigation (ObjcetNav), which enables an agent to seek any instance of an object category specified by a semantic label, has shown great advances. However, current agents are built upon occlusion-prone visual observations or compressed 2D semantic maps, which hinder their embodied perception of 3D scene geometry and easily lead to ambiguous object localization and blind exploration. To address these limitations, we present an Embodied Contrastive Learning (ECL) method with Geometric Consistency (GC) and Behavioral Awareness (BA), which motivates agents to actively encode 3D scene layouts and semantic cues. Driven by our embodied exploration strategy, BA is modeled by predicting navigational actions based on multi-frame visual images, as behaviors that cause differences between adjacent visual sensations are crucial for learning correlations among continuous visions. The GC is modeled as the alignment of behavior-aware visual stimulus with 3D semantic shapes by employing unsupervised contrastive learning. The aligned behavior-aware visual features and geometric invariance priors are injected into a modular ObjectNav framework to enhance object recognition and exploration capabilities. As expected, our ECL method performs well on object detection and instance segmentation tasks. Our ObjectNav strategy outperforms state-of-the-art methods on MP3D and Gibson datasets, showing the potential of our ECL in embodied navigation. The experimental code is available as supplementary material. | Embodied Contrastive Learning with Geometric Consistency and Behavioral Awareness for Object Navigation | [
"Bolei Chen",
"Jiaxu Kang",
"Ping Zhong",
"Yixiong Liang",
"Yu Sheng",
"Jianxin Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WGosDPxYmx | @inproceedings{
yin2024adversarial,
title={Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline},
author={Jia-Li Yin and Menghao chen and jin Han and Bo-Hao Chen and Ximeng Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WGosDPxYmx}
} | Adversarial examples (AEs), which are maliciously hand-crafted by adding perturbations to benign images, reveal the vulnerability of deep neural networks (DNNs) and have been used as a benchmark for evaluating model robustness. While great efforts have been devoted to generating AEs with stronger attack ability, the visual quality of AEs is generally neglected in previous studies. The lack of a good quality measure of AEs makes it very hard to compare the relative merits of attack techniques and is hindering technological advancement. How to evaluate the visual quality of AEs remains an understudied and unsolved problem.
In this work, we make the first attempt to fill the gap by presenting an image quality assessment method specifically designed for AEs. Towards this goal, we first construct a new database, called AdvDB, developed on diverse adversarial examples with elaborated annotations. We also propose a detection-based structural similarity index (AdvDSS) for adversarial example perceptual quality assessment. Specifically, the visual saliency for capturing the near-threshold adversarial distortions is first detected via human visual system (HVS) techniques and then the structural similarity is extracted to predict the quality score. Moreover, we further propose AEQA for overall adversarial example quality assessment by integrating the perceptual quality and attack intensity of AEs. Extensive experiments validate that the proposed AdvDSS achieves state-of-the-art performance which is more consistent with human opinions. | Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline | [
"Jia-Li Yin",
"Menghao chen",
"jin Han",
"Bo-Hao Chen",
"Ximeng Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WDAsva0ddw | @inproceedings{
zhang2024egen,
title={$E^{3}$Gen: Efficient, Expressive and Editable Avatars Generation},
author={Weitian Zhang and Yichao Yan and Yunhui Liu and Xingdong Sheng and Xiaokang Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WDAsva0ddw}
} | This paper aims to introduce 3D Gaussians for efficient, expressive, and editable digital avatar generation. This task faces two major challenges: 1) The unstructured nature of 3D Gaussians makes it incompatible with current generation pipelines; 2) the animation of 3D Gaussians in a generative setting that involves training with multiple subjects remains unexplored. In this paper, we propose a novel avatar generation method named $E^{3}$Gen to effectively address these challenges. First, we propose a novel generative UV features representation that encodes unstructured 3D Gaussians onto a structured 2D UV space defined by the SMPLX parametric model. This novel representation not only preserves the representation ability of the original 3D Gaussians but also introduces a shared structure among subjects to enable generative learning of the diffusion model. To tackle the second challenge, we propose a part-aware deformation module to achieve robust and accurate full-body expressive pose control. Extensive experiments demonstrate that our method achieves superior performance in avatar generation and enables expressive full-body pose control and editing. | E^3Gen: Efficient, Expressive and Editable Avatars Generation | [
"Weitian Zhang",
"Yichao Yan",
"Yunhui Liu",
"Xingdong Sheng",
"Xiaokang Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WAnTMAMjBx | @inproceedings{
cai2024frequencyaware,
title={Frequency-Aware {GAN} for Imperceptible Transfer Attack on 3D Point Clouds},
author={Xiaowen Cai and Yunbo Tao and Daizong Liu and Pan Zhou and Xiaoye Qu and Jianfeng Dong and Keke Tang and Lichao Sun},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=WAnTMAMjBx}
} | With the development of depth sensors and 3D vision, the vulnerability of 3D point cloud models has garnered heightened concern. Almost all existing 3D attackers are deployed in the white-box setting, where they access the model details and directly optimize coordinate-wise noises to perturb 3D objects. However, realistic 3D applications would not share any model information (model parameters, gradients, etc.) with users. Although a few recent works try to explore the black-box attack, they still achieve limited attack success rates (ASR) and fail to generate high-quality adversarial samples. In this paper, we focus on designing a transfer-based black-box attack method, called Transferable Frequency-aware 3D GAN, to delve into achieving a high black-box ASR by improving the adversarial transferability while making the adversarial samples more imperceptible. Considering that the 3D imperceptibility depends on whether the shape of the object is distorted, we utilize the spectral tool with the GAN design to explicitly perceive and preserve the 3D geometric structures. Specifically, we design the Graph Fourier Transform (GFT) encoding layer in the GAN generator to extract the geometries as guidance, and develop a corresponding Inverse-GFT decoding layer to decode latent features with this guidance to reconstruct high-quality adversarial samples. To further improve the transferability, we develop a dual learning scheme of discriminator from both frequency and feature perspectives to constrain the generator via adversarial learning. Finally, imperceptible and transferable perturbations are rapidly generated by our proposed attack. Experimental results demonstrate that our attack method achieves the highest transfer ASR while exhibiting stronger imperceptibility. | Frequency-Aware GAN for Imperceptible Transfer Attack on 3D Point Clouds | [
"Xiaowen Cai",
"Yunbo Tao",
"Daizong Liu",
"Pan Zhou",
"Xiaoye Qu",
"Jianfeng Dong",
"Keke Tang",
"Lichao Sun"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VyqQtIg5SH | @inproceedings{
ye2024dqformer,
title={{DQ}-Former: Querying Transformer with Dynamic Modality Priority for Cognitive-aligned Multimodal Emotion Recognition in Conversation},
author={Jing Ye and Xinpei Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VyqQtIg5SH}
} | Multimodal Emotion Recognition in Conversations aims to understand the human emotion of each utterance in a conversation from different types of data, such as speech and text.
Previous works mainly focus on either complex unimodal feature extraction or sophisticated fusion techniques as general multimodal classification tasks do.
However, they ignore the process of human perception, neglecting various levels of emotional features within each modality and disregarding the unique contributions of different modalities for emotion recognition.
To address these issues, we propose a more cognitive-aligned multimodal fusion framework, namely DQ-Former.
Specifically, DQ-Former utilizes a small set of learnable query tokens to collate and condense various granularities of emotion cues embedded at different layers of pre-trained unimodal models. Subsequently, it integrates these emotional features from different modalities with dynamic modality priorities at each intermediate fusion layer. This process enables explicit and effective fusion of different levels of information from diverse modalities.
Extensive experiments on MELD and IEMOCAP datasets validate the effectiveness of DQ-Former. Our results show that the proposed method achieves a robust and interpretable multimodal representation for emotion recognition. | DQ-Former: Querying Transformer with Dynamic Modality Priority for Cognitive-aligned Multimodal Emotion Recognition in Conversation | [
"Jing Ye",
"Xinpei Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Vxpn4VmPzI | @inproceedings{
wang2024exploring,
title={Exploring in Extremely Dark: Low-Light Video Enhancement with Real Events},
author={Xicong Wang and Huiyuan Fu and Jiaxuan Wang and Xin Wang and Heng Zhang and Huadong Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Vxpn4VmPzI}
} | Due to sensor limitations, traditional cameras struggle to capture details within extremely dark areas of videos. The absence of such details can significantly impact the effectiveness of low-light video enhancement. In contrast, event cameras offer a visual representation with higher dynamic range, facilitating the capture of motion information even in exceptionally dark conditions. Motivated by this advantage, we propose the Real-Event Embedded Network for low-light video enhancement. To better utilize events for enhancing extremely dark regions, we propose an Event-Image Fusion module, which can identify these dark regions and enhance them significantly. To ensure temporal stability of the video and restore details within extremely dark areas, we design an unsupervised temporal consistency loss and a detail contrast loss. Alongside the supervised loss, these loss functions collectively contribute to the semi-supervised training of the network on unpaired real data. Experimental results on synthetic and real data demonstrate the superiority of the proposed method compared to the state-of-the-art methods. Our codes will be publicly available. | Exploring in Extremely Dark: Low-Light Video Enhancement with Real Events | [
"Xicong Wang",
"Huiyuan Fu",
"Jiaxuan Wang",
"Xin Wang",
"Heng Zhang",
"Huadong Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VtPjK2D0rL | @inproceedings{
pan2024disentangledmultimodal,
title={Disentangled-Multimodal Privileged Knowledge Distillation for Depression Recognition with Incomplete Multimodal Data},
author={Yuchen Pan and Junjun Jiang and Kui Jiang and Xianming Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VtPjK2D0rL}
} | Depression recognition (DR) using facial images, audio signals, or language text recordings has achieved remarkable performance. Recently, multimodal DR has shown improved performance over single-modal methods by leveraging information from a combination of these modalities. However, collecting high-quality data containing all modalities poses a challenge. In particular, these methods often encounter performance degradation when certain modalities are either missing or degraded. To tackle this issue, we present a generalizable multimodal framework for DR by aggregating feature disentanglement and privileged knowledge distillation. In detail, our approach aims to disentangle homogeneous and heterogeneous features within multimodal signals while suppressing noise, thereby adaptively aggregating the most informative components for high-quality DR. Subsequently, we leverage knowledge distillation to transfer privileged knowledge from complete modalities to the observed input with limited information, thereby significantly improving the tolerance and compatibility. These strategies form our novel Feature Disentanglement and Privileged knowledge Distillation Network for DR, dubbed Dis2DR. Experimental evaluations on AVEC 2013, AVEC 2014, AVEC 2017, and AVEC 2019 datasets demonstrate the effectiveness of our Dis2DR method. Remarkably, Dis2DR achieves superior performance even when only a single modality is available, surpassing existing state-of-the-art multimodal DR approaches AVA-DepressNet by up to 9.8% on the AVEC 2013 dataset. | Disentangled-Multimodal Privileged Knowledge Distillation for Depression Recognition with Incomplete Multimodal Data | [
"Yuchen Pan",
"Junjun Jiang",
"Kui Jiang",
"Xianming Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VstrrJTQV0 | @inproceedings{
zhang2024explore,
title={Explore Hybrid Modeling for Moving Infrared Small Target Detection},
author={Mingjin Zhang and Shilong Liu and Yuanjun Ouyang and Jie Guo and Zhihong Tang and Yunsong Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VstrrJTQV0}
} | Moving infrared small target detection, crucial in contexts like traffic management and maritime rescue, encounters challenges from factors such as complex backgrounds, target occlusion, camera shake, and motion blur. Existing algorithms fall short in comprehensively addressing these issues by finding mathematical models, impeding generalization in complex and dynamic motion scenes. In this paper, we propose a method for finding models of moving infrared small target detection via smoothed-particle hydrodynamics (SPH) and Markov decision processes (MDP). SPH can simulate the motion trajectories of targets and background scenes, while MDP can optimize detection system strategies for optimal action selection based on contexts and target states. Specifically, we develop an SPH-inspired image-level enhancement algorithm which models the image sequence of infrared video as a 3D spatiotemporal graph in SPH. In addition, we design an MDP-guided temporal feature perception module. This module selects reference frames and aggregates features from both the reference frames and the current frame. The previous and current frames are modeled as an MDP tailored for multi-frame infrared small target detection tasks, aiding in detecting the current frame. In extensive experiments conducted on two public datasets, DAUB and DATR, the proposed STME-Net surpasses state-of-the-art methods in terms of objective metrics and visual quality. | Explore Hybrid Modeling for Moving Infrared Small Target Detection | [
"Mingjin Zhang",
"Shilong Liu",
"Yuanjun Ouyang",
"Jie Guo",
"Zhihong Tang",
"Yunsong Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Vs3KRr6tjU | @inproceedings{
quan2024enhancing,
title={Enhancing Underwater Images via Asymmetric Multi-Scale Invertible Networks},
author={Yuhui Quan and Xiaoheng Tan and Yan Huang and Yong Xu and Hui Ji},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Vs3KRr6tjU}
} | Underwater images, often plagued by complex degradation, pose significant challenges for image enhancement. To address these challenges, the paper redefines underwater image enhancement as an image decomposition problem and proposes a deep invertible neural network (INN) that accurately predicts both the latent image and the degradation effects. Instead of using an explicit formation model to describe the degradation process, the INN adheres to the constraints of the image decomposition model, providing necessary regularization for model training, particularly in the absence of supervision on degradation effects. Taking into account the diverse scales of degradation factors, the INN is structured on a multi-scale basis to effectively manage the varied scales of degradation factors. Moreover, the INN incorporates several asymmetric design elements that are specifically optimized for the decomposition model and the unique physics of underwater imaging. Comprehensive experiments show that our approach provides significant performance improvement over existing methods. | Enhancing Underwater Images via Asymmetric Multi-Scale Invertible Networks | [
"Yuhui Quan",
"Xiaoheng Tan",
"Yan Huang",
"Yong Xu",
"Hui Ji"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VrcR1LLxJO | @inproceedings{
zhang2024an,
title={An Entailment Tree Generation Approach for Multimodal Multi-Hop Question Answering with Mixture-of-Experts and Iterative Feedback Mechanism},
author={Qing Zhang and Haocheng Lv and Jie Liu and Zhiyun Chen and JianYong Duan and Hao Wang and Li He and Mingying Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VrcR1LLxJO}
} | With the rise of large-scale language models (LLMs), it is currently popular and effective to convert multimodal information into text descriptions for multimodal multi-hop question answering. However, we argue that the current methods of multi-modal multi-hop question answering still mainly face two challenges: 1) The retrieved evidence contains a large amount of redundant information, which inevitably leads to a significant drop in performance as irrelevant information misleads the prediction. 2) A reasoning process without interpretable reasoning steps makes it difficult for the model to discover logical errors when handling complex questions. To solve these problems, we propose a unified LLM-based approach that avoids relying heavily on LLMs due to their potential errors, and innovatively treat multimodal multi-hop question answering as a joint entailment tree generation and question answering problem. Specifically, we design a multi-task learning framework with a focus on facilitating common knowledge sharing across interpretability and prediction tasks while preventing task-specific errors from interfering with each other via a mixture of experts. Afterward, we design an iterative feedback mechanism to further enhance both tasks by feeding back the results of the joint training to the LLM for regenerating entailment trees, aiming to iteratively refine the potential answer. Notably, our method has \textbf{won first place} in the official leaderboards of WebQA (since April 10, 2024) and achieves competitive results on MultimodalQA. | An Entailment Tree Generation Approach for Multimodal Multi-Hop Question Answering with Mixture-of-Experts and Iterative Feedback Mechanism | [
"Qing Zhang",
"Haocheng Lv",
"Jie Liu",
"Zhiyun Chen",
"JianYong Duan",
"Hao Wang",
"Li He",
"Mingying Xu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VnaNemZgPj | @inproceedings{
zhan2024satpose,
title={{SATP}ose: Improving Monocular 3D Pose Estimation with Spatial-aware Ground Tactility},
author={Lishuang Zhan and Enting Ying and Jiabao Gan and Shihui Guo and BoYu Gao and Yipeng Qin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VnaNemZgPj}
} | Estimating 3D human poses from monocular images is an important research area with many practical applications. However, the depth ambiguity of 2D solutions limits their accuracy in actions where occlusion exists or where slight centroid shifts can result in significant 3D pose variations. In this paper, we introduce a novel multimodal approach to mitigate the depth ambiguity inherent in monocular solutions by integrating spatial-aware pressure information. To achieve this, we first establish a data collection system with a pressure mat and a monocular camera, and construct a large-scale multimodal human activity dataset comprising over 600,000 frames of motion data. Utilizing this dataset, we propose a pressure image reconstruction network to extract pressure priors from monocular images. Subsequently, we introduce a Transformer-based multimodal pose estimation network to combine pressure priors with monocular images, achieving a world mean per joint position error (W-MPJPE) of 51.6mm, outperforming state-of-the-art methods. Extensive experiments demonstrate the effectiveness of our multimodal 3D human pose estimation method across various actions and joints, highlighting the significance of spatial-aware pressure in improving the accuracy of monocular 3D pose estimation methods. Our dataset is available at: https://anonymous.4open.science/r/SATPose-51DD. | SATPose: Improving Monocular 3D Pose Estimation with Spatial-aware Ground Tactility | [
"Lishuang Zhan",
"Enting Ying",
"Jiabao Gan",
"Shihui Guo",
"BoYu Gao",
"Yipeng Qin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VmSRPbpD9c | @inproceedings{
kangpeng2024interactive,
title={Interactive Segmentation by Considering First-Click Intentional Ambiguity},
author={Kangpeng Hu and Sun Quansen and Yinghui Sun and Tao Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VmSRPbpD9c}
} | The interactive segmentation task aims to take the influence of user preferences into account on top of general semantic segmentation in order to obtain the specific target-of-interest. Most related algorithms generate only a single mask, whose robustness might be constrained due to the diversity of user intention in the early interaction stage, namely the vague selection of an object part/the whole object/an adherent object, especially when there is only one click. To handle this, we propose a novel framework called Diversified Interactive Segmentation Network (DISNet) in which we revisit the peculiarity of the first click: given an input image, DISNet outputs multiple candidate masks under the guidance of the first click only; it then utilizes a Dual-attentional Mask Correction (DAMC) module consisting of two branches: a) Masked attention based on click propagation. b) Mixed attention within the first click, subsequent clicks and the image w.r.t. position and feature space. Moreover, we design a new sampling strategy to generate GT masks with rich semantic relations. The comparison between DISNet and mainstream algorithms demonstrates the efficacy of our method, which further exemplifies the decisive role of the first click in the realm of interactive segmentation. | Interactive Segmentation by Considering First-Click Intentional Ambiguity | [
"Kangpeng Hu",
"Sun Quansen",
"Yinghui Sun",
"Tao Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VljEshSNa6 | @inproceedings{
shen2024multilabel,
title={Multi-Label Learning with Block Diagonal Labels},
author={Leqi Shen and Sicheng Zhao and Yifeng Zhang and Hui Chen and Jundong Zhou and pengzhang liu and Yongjun Bao and Guiguang Ding},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VljEshSNa6}
} | Collecting large-scale multi-label data with \emph{full labels} is difficult for real-world scenarios. Many existing studies have tried to address the issue of missing labels caused by annotation but ignored the difficulties encountered during the annotation process. We find that the high annotation workload can be attributed to two reasons: (1) Annotators are required to identify labels on widely varying visual concepts. (2) Exhaustively annotating the entire dataset with all the labels becomes notably difficult and time-consuming. In this paper, we propose a new setting, i.e. block diagonal labels, to reduce the workload on both sides. The numerous categories can be divided into different subsets based on semantics and relevance. Each annotator can only focus on its own subset of labels so that only a small set of highly relevant labels are required to be annotated per image. To deal with the issue of such \emph{missing labels}, we introduce a simple yet effective method that does not require any prior knowledge of the dataset. In practice, we propose an Adaptive Pseudo-Labeling method to predict the unknown labels with less noise. Formal analysis is conducted to evaluate the superiority of our setting. Extensive experiments are conducted to verify the effectiveness of our method on multiple widely used benchmarks. | Multi-Label Learning with Block Diagonal Labels | [
"Leqi Shen",
"Sicheng Zhao",
"Yifeng Zhang",
"Hui Chen",
"Jundong Zhou",
"pengzhang liu",
"Yongjun Bao",
"Guiguang Ding"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VkiacHTDs0 | @inproceedings{
liu2024openset,
title={Open-Set Video-based Facial Expression Recognition with Human Expression-sensitive Prompting},
author={Yuanyuan Liu and Yuxuan Huang and Shuyang Liu and Yibing Zhan and Zijing Chen and Zhe Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VkiacHTDs0}
} | In Video-based Facial Expression Recognition (V-FER), models are typically trained on closed-set datasets with a fixed number of known classes. However, these models struggle with unknown classes common in real-world scenarios. In this paper, we introduce a challenging Open-set Video-based Facial Expression Recognition (OV-FER) task, aiming to identify both known and new, unseen facial expressions. While existing approaches use large-scale vision-language models like CLIP to identify unseen classes, we argue that these methods may not adequately capture the subtle human expressions needed for OV-FER. To address this limitation, we propose a novel Human Expression-Sensitive Prompting (HESP) mechanism to significantly enhance CLIP's ability to model video-based facial expression details effectively. Our proposed HESP comprises three components: 1) a textual prompting module with learnable prompts to enhance CLIP's textual representation of both known and unknown emotions, 2) a visual prompting module that encodes temporal emotional information from video frames using expression-sensitive attention, equipping CLIP with a new visual modeling ability to extract emotion-rich information, and 3) an open-set multi-task learning scheme that promotes interaction between the textual and visual modules, improving the understanding of novel human emotions in video sequences. Extensive experiments conducted on four OV-FER task settings demonstrate that HESP can significantly boost CLIP's performance (a relative improvement of 17.93% on AUROC and 106.18% on OSCR) and outperform other state-of-the-art open-set video understanding methods by a large margin. Code is available at https://github.com/cosinehuang/HESP. | Open-Set Video-based Facial Expression Recognition with Human Expression-sensitive Prompting | [
"Yuanyuan Liu",
"Yuxuan Huang",
"Shuyang Liu",
"Yibing Zhan",
"Zijing Chen",
"Zhe Chen"
] | Conference | poster | 2404.17100 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=VcovhnCKSt | @inproceedings{
luo2024video,
title={Video Bokeh Rendering: Make Casual Videography Cinematic},
author={Yawen Luo and Min Shi and Liao Shen and Yachuan Huang and Zixuan Ye and Juewen Peng and Zhiguo Cao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VcovhnCKSt}
} | Bokeh is a wide-aperture optical effect that creates aesthetic blurring in photography. However, achieving this effect typically demands expensive professional equipment and expertise. To make such cinematic techniques more accessible, bokeh rendering aims to generate the desired bokeh effects from all-in-focus inputs captured by smartphones. Previous efforts in bokeh rendering primarily focus on static images. However, when extended to video inputs, these methods exhibit flicker and artifacts due to a lack of temporal consistency modeling. Meanwhile, they cannot utilize information like occluded objects from adjacent frames, which are necessary for bokeh rendering. Moreover, the difficulties of capturing all-in-focus and bokeh video pairs result in a shortage of data for training video bokeh models. To tackle these challenges, we propose the Video Bokeh Renderer (VBR), the model designed specifically for video bokeh rendering. VBR leverages implicit feature space alignment and aggregation to model temporal consistency and exploit complementary information from adjacent frames. On the data front, we introduce the first Synthetic Video Bokeh (SVB) dataset, synthesizing authentic bokeh effects using ray-tracing techniques. Furthermore, to improve the robustness of the model to inaccurate disparity maps, we employ a set of augmentation strategies to simulate corrupted disparity inputs during training. Experimental results on both synthetic and real-world data demonstrate the effectiveness of our method. Code and dataset will be released. | Video Bokeh Rendering: Make Casual Videography Cinematic | [
"Yawen Luo",
"Min Shi",
"Liao Shen",
"Yachuan Huang",
"Zixuan Ye",
"Juewen Peng",
"Zhiguo Cao"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VcQpwz0wIJ | @inproceedings{
he2024hierarchical,
title={Hierarchical Perceptual and Predictive Analogy-Inference Network for Abstract Visual Reasoning},
author={Wentao He and Jianfeng Ren and Ruibin Bai and Xudong Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VcQpwz0wIJ}
} | Advances in computer vision research enable human-like high-dimensional perceptual induction over analogical visual reasoning problems, such as Raven's Progressive Matrices (RPMs). In this paper, we propose a Hierarchical Perception and Predictive Analogy-Inference network (HP$^2$AI), consisting of three major components that tackle key challenges of RPM problems. Firstly, in view of the limited receptive fields of shallow networks in most existing RPM solvers, a perceptual encoder is proposed, consisting of a series of hierarchically coupled Patch Attention and Local Context (PALC) blocks, which could capture local attributes at early stages and capture the global panel layout at deep stages. Secondly, most methods seek for object-level similarities to map the context images directly to the answer image, while failing to extract the underlying analogies. The proposed reasoning module, Predictive Analogy-Inference (PredAI), consists of a set of Analogy-Inference Blocks (AIBs) to model and exploit the inherent analogical reasoning rules instead of object similarity. Lastly, the Squeeze-and-Excitation Channel-wise Attention (SECA) in the proposed PredAI discriminates essential attributes and analogies from irrelevant ones. Extensive experiments over four benchmark RPM datasets show that the proposed HP$^2$AI achieves significant performance gains over all the state-of-the-art methods consistently on all four datasets. | Hierarchical Perceptual and Predictive Analogy-Inference Network for Abstract Visual Reasoning | [
"Wentao He",
"Jianfeng Ren",
"Ruibin Bai",
"Xudong Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VZXdvxWDvq | @inproceedings{
zhu2024kebr,
title={{KEBR}: Knowledge Enhanced Self-Supervised Balanced Representation for Multimodal Sentiment Analysis},
author={Aoqiang Zhu and Min Hu and Xiaohua Wang and Jiaoyun Yang and Yiming Tang and Fuji Ren},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VZXdvxWDvq}
} | Multimodal sentiment analysis (MSA) aims to integrate multiple modalities of information to better understand human sentiment. The current research mainly focuses on conducting multimodal fusion and representation learning, which neglects the under-optimized modal representations generated by the imbalance of unimodal performances in joint learning. Moreover, the size of labeled datasets limits the generalization ability of existing supervised models used in MSA. To address the above issues, this paper proposes a knowledge-enhanced self-supervised balanced representation approach (KEBR) to capture common sentimental knowledge in unlabeled videos and explore the optimization issue of information imbalance between modalities. First, a text-based cross-modal fusion method (TCMF) is constructed, which injects the non-verbal information from the videos into the semantic representation of text to enhance the multimodal representation of text. Then, a multimodal cosine constrained loss (MCC) is designed to constrain the fusion of non-verbal information in joint learning to balance the representation of multimodal information. Finally, with the help of sentiment knowledge and non-verbal information, KEBR conducts sentiment word masking and sentiment intensity prediction, so that the sentiment knowledge in the videos is embedded into the pre-trained multimodal representation in a balanced manner. Experimental results on two publicly available datasets MOSI and MOSEI show that KEBR significantly outperforms the baseline, achieving new state-of-the-art results. | KEBR: Knowledge Enhanced Self-Supervised Balanced Representation for Multimodal Sentiment Analysis | [
"Aoqiang Zhu",
"Min Hu",
"Xiaohua Wang",
"Jiaoyun Yang",
"Yiming Tang",
"Fuji Ren"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VZNzaVpWqb | @inproceedings{
li2024sparseformer,
title={SparseFormer: Detecting Objects in {HRW} Shots via Sparse Vision Transformer},
author={Wenxi Li and Yuchen Guo and Jilai Zheng and Haozhe Lin and Chao Ma and LU FANG and Xiaokang Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VZNzaVpWqb}
} | Recent years have seen an increase in the use of gigapixel-level image and video capture systems and benchmarks with high-resolution wide (HRW) shots. However, unlike close-up shots in the MS COCO dataset, the higher resolution and wider field of view raise unique challenges, such as extreme sparsity and huge scale changes, causing existing close-up detectors inaccuracy and inefficiency. In this paper, we present a novel model-agnostic sparse vision transformer, dubbed SparseFormer, to bridge the gap of object detection between close-up and HRW shots. The proposed SparseFormer selectively uses attentive tokens to scrutinize the sparsely distributed windows that may contain objects. In this way, it can jointly explore global and local attention by fusing coarse- and fine-grained features to handle huge scale changes. SparseFormer also benefits from a novel Cross-slice non-maximum suppression (C-NMS) algorithm to precisely localize objects from noisy windows and a simple yet effective multi-scale strategy to improve accuracy. Extensive experiments on two HRW benchmarks, PANDA and DOTA-v1.0, demonstrate that the proposed SparseFormer significantly improves detection accuracy (up to 5.8%) and speed (up to 3x) over the state-of-the-art approaches. | SparseFormer: Detecting Objects in HRW Shots via Sparse Vision Transformer | [
"Wenxi Li",
"Yuchen Guo",
"Jilai Zheng",
"Haozhe Lin",
"Chao Ma",
"LU FANG",
"Xiaokang Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VPXAhSMYXF | @inproceedings{
zhan2024free,
title={Free Lunch: Frame-level Contrastive Learning with Text Perceiver for Robust Scene Text Recognition in Lightweight Models},
author={Hongjian Zhan and yangfu Li and Xiong Yu-Jie and Umapada Pal and Yue Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VPXAhSMYXF}
} | Lightweight models play an important role in real-life applications, especially in the recent mobile device era. However, due to limited network scale and low-quality images, the performance of lightweight models on Scene Text Recognition (STR) tasks is still much to be improved. Recently, contrastive learning has shown its power in many areas, with promising performances without additional computational cost. Based on these observations, we propose a new efficient and effective frame-level contrastive learning (FLCL) framework for lightweight STR models. The FLCL framework consists of a backbone to extract basic features, a Text Perceiver Module (TPM) to focus on text-relevant representations, and a FLCL loss to update the network. The backbone can be any feature extraction architecture. The TPM is an innovative Mamba-based structure that is designed to suppress features irrelevant to the text content from the backbone. Unlike existing word-level contrastive learning, we look into the nature of the STR task and propose the frame-level contrastive learning loss, which can work well with the famous Connectionist Temporal Classification loss. We conduct experiments on six well-known STR benchmarks as well as a new low-quality dataset. Compared to vanilla contrastive learning and other non-parameter methods, the FLCL framework significantly outperforms others on all datasets, especially the low-quality dataset. In addition, character feature visualization demonstrates that the proposed method can yield more discriminative character features for visually similar characters, which also substantiates the efficacy of the proposed methods. Codes and the low-quality dataset will be available soon. | Free Lunch: Frame-level Contrastive Learning with Text Perceiver for Robust Scene Text Recognition in Lightweight Models | [
"Hongjian Zhan",
"yangfu Li",
"Xiong Yu-Jie",
"Umapada Pal",
"Yue Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VMdcfUn5Ms | @inproceedings{
wang2024advqdet,
title={Adv{QD}et: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning},
author={Xin Wang and Kai Chen and Xingjun Ma and Zhineng Chen and Jingjing Chen and Yu-Gang Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VMdcfUn5Ms}
} | Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks even under a black-box setting where the adversary can only query the model. Particularly, query-based black-box adversarial attacks estimate adversarial gradients based on the returned probability vectors of the target model for a sequence of queries. During this process, the queries made to the target model are intermediate adversarial examples crafted at the previous attack step, which share high similarities in the pixel space. Motivated by this observation, stateful detection methods have been proposed to detect and reject query-based attacks. While demonstrating promising results, these methods either have been evaded by more advanced attacks or suffer from low efficiency in terms of the number of shots (queries) required to detect different attacks. Arguably, the key challenge here is to assign high similarity scores for any two intermediate adversarial examples perturbed from the same image. To address this challenge, we propose a novel Adversarial Contrastive Prompt Tuning (ACPT) method to robustly fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries. With ACPT, we further introduce a detection framework AdvDet that can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots. We also show that ACPT is robust to 3 types of adaptive attacks. | AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning | [
"Xin Wang",
"Kai Chen",
"Xingjun Ma",
"Zhineng Chen",
"Jingjing Chen",
"Yu-Gang Jiang"
] | Conference | poster | 2408.01978 | [
"https://github.com/xinwong/advqdet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=VIPZtona4Q | @inproceedings{
bo2024towards,
title={Towards Medical Vision-Language Contrastive Pre-training via Study-Oriented Semantic Exploration},
author={LIU BO and LU ZEXIN and Yan Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VIPZtona4Q}
} | Contrastive vision-language pre-training has shown great promise in representation transfer learning and cross-modality learning in the medical field. However, without fully exploiting the intrinsic properties and correlations of multimodal medical data within patient studies, current research fails to explore all the potential of available data, leading to suboptimal performance on representation learning. In this paper, we propose a novel pre-training framework for learning better medical vision-language embedding, oriented on patients' study-level data. Based on the order-agnostic property of radiology report, we adopt a two-stage feature extraction method for more representative textual characterization. Then, by leveraging momentum encoders and memory queues, study-level semantics are explored with three contrastive objectives to provide comprehensive supervision from three perspectives, i.e., cross-modal, multi-modal, and uni-modal, such that the potential information neglected by previous research can be fully exploited. The superiority of the proposed framework is demonstrated by the impressive improvements on four typical downstream tasks, including zero-shot/data-efficient image classification, image segmentation, and cross-modal retrieval. | Towards Medical Vision-Language Contrastive Pre-training via Study-Oriented Semantic Exploration | [
"LIU BO",
"LU ZEXIN",
"Yan Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VEHNTupyIU | @inproceedings{
yang2024hid,
title={Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models},
author={Haibo Yang and Yang Chen and Yingwei Pan and Ting Yao and Zhineng Chen and Chong-Wah Ngo and Tao Mei},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VEHNTupyIU}
} | Despite having tremendous progress in image-to-3D generation, existing methods still struggle to produce multi-view consistent images with high-resolution textures in detail, especially in the paradigm of 2D diffusion that lacks 3D awareness. In this work, we present High-resolution Image-to-3D model (Hi3D), a new video diffusion based paradigm that redefines a single image to multi-view images as 3D-aware sequential image generation (i.e., orbital video generation). This methodology delves into the underlying temporal consistency knowledge in video diffusion model that generalizes well to geometry consistency across multiple views in 3D generation. Technically, Hi3D first empowers the pre-trained video diffusion model with 3D-aware prior (camera pose condition), yielding multi-view images with low-resolution texture details. A 3D-aware video-to-video refiner is learnt to further scale up the multi-view images with high-resolution texture details. Such high-resolution multi-view images are further augmented with novel views through 3D Gaussian Splatting, which are finally leveraged to obtain high-fidelity meshes via 3D reconstruction. Extensive experiments on both novel view synthesis and single view reconstruction demonstrate that our Hi3D manages to produce superior multi-view consistency images with highly-detailed textures. | Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models | [
"Haibo Yang",
"Yang Chen",
"Yingwei Pan",
"Ting Yao",
"Zhineng Chen",
"Chong-Wah Ngo",
"Tao Mei"
] | Conference | poster | 2409.07452 | [
"https://github.com/yanghb22-fdu/hi3d-official"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=VEGCPbt8H9 | @inproceedings{
lv2024sarslam,
title={{SAR}-{SLAM}: Self-Attentive Rendering-based {SLAM} with Neural Point Cloud Encoding},
author={Xudong Lv and Zhiwei He and Yuxiang Yang and Jiahao Nie and Jing Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VEGCPbt8H9}
} | Neural implicit representations have recently revolutionized simultaneous localization and mapping (SLAM), giving rise to a groundbreaking paradigm known as NeRF-based SLAM. However, existing methods often fall short in accurately estimating poses and reconstructing scenes. This limitation largely stems from their reliance on volume rendering techniques, which oversimplify the modeling process. In this paper, we introduce a novel neural implicit SLAM system designed to address these shortcomings. Our approach reconstructs Neural Radiance Fields (NeRFs) using a self-attentive architecture and represents scenes through neural point cloud encoding. Unlike previous NeRF-based SLAM methods, which depend on traditional volume rendering equations for scene representation and view synthesis, our method employs a self-attentive rendering framework with the Transformer architecture during mapping and tracking stages. To enable incremental mapping, we anchor scene features within a neural point cloud, striking a balance between estimation accuracy and computational cost. Experimental results across three challenging datasets demonstrate the superior performance and robustness of our proposed approach compared to recent NeRF-based SLAM systems. The code will be released. | SAR-SLAM: Self-Attentive Rendering-based SLAM with Neural Point Cloud Encoding | [
"Xudong Lv",
"Zhiwei He",
"Yuxiang Yang",
"Jiahao Nie",
"Jing Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VDd4Yx9eoG | @inproceedings{
liu2024adaptively,
title={Adaptively Building a Video-language Model for Video Captioning and Retrieval without Massive Video Pretraining},
author={Zihao Liu and Xiaoyu Wu and Shengjin Wang and Jiayao Qian},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=VDd4Yx9eoG}
} | Large-scale pretrained image-language models have shown remarkable performance recently. However, building a video-language model is more challenging due to the complexity of video and the difficulty of collecting high-quality data. This paper builds a video-language model in an adaptive manner, which transfers the knowledge from the image domain and can achieve state-of-the-art performance without any further massive video pretraining. The main contributions include a Visual Perception Adapter that seamlessly and efficiently adapts a pretrained image-language model to the video domain and a fine-grained contrastive learning with Inter-modal Token Alignment that bridges semantic gaps between vision, audio, and language with less data. The proposed model is evaluated on video captioning and retrieval. Experiments demonstrate that the proposed model exhibits competitive performance compared to models pretrained on millions of video-text pairs. Notably, our model's CIDEr and R@1 scores on the MSR-VTT dataset exceed the existing state-of-the-art by 6.3\% and 1.3\%. | Adaptively Building a Video-language Model for Video Captioning and Retrieval without Massive Video Pretraining | [
"Zihao Liu",
"Xiaoyu Wu",
"Shengjin Wang",
"Jiayao Qian"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=V5HU1OvHnx | @inproceedings{
zhang2024sceneexpander,
title={SceneExpander: Real-Time Scene Synthesis for Interactive Floor Plan Editing},
author={Shao-Kui Zhang and Junkai Huang and Liang Yue and Jia-Tong Zhang and Jia-Hong Liu and Yu-Kun Lai and Song-Hai Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=V5HU1OvHnx}
} | Scene synthesis has gained significant attention recently, and interactive scene synthesis focuses on yielding scenes according to user preferences. Existing literature either generates floor plans or scenes according to the floor plans. This paper generates scenes over floor plans in real time. Given an initial scene, the only interaction a user needs is changing the room shapes. Our framework splits/merges rooms and adds/rearranges/removes objects for each transient moment during interactions. A systematic pipeline achieves our framework by compressing objects' arrangements over modified room shapes in a transient moment, thus enabling real-time performances. We also propose elastic boxes that indicate how objects should be arranged according to their continuously changed contexts, such as room shapes and other objects. Through a few interactions, a floor plan filled with object layouts is generated concerning user preferences on floor plans and object layouts according to floor plans. Experiments show that our framework is efficient at user interactions and plausible for synthesizing 3D scenes. | SceneExpander: Real-Time Scene Synthesis for Interactive Floor Plan Editing | [
"Shao-Kui Zhang",
"Junkai Huang",
"Liang Yue",
"Jia-Tong Zhang",
"Jia-Hong Liu",
"Yu-Kun Lai",
"Song-Hai Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Uz7a4N6aKi | @inproceedings{
guo2024bcscnreducing,
title={{BCSCN}:Reducing Domain Gap through B\'ezier Curve basis-based Sparse Coding Network for Single-Image Super-Resolution},
author={Wenhao Guo and Peng Lu and Xujun Peng and Zhaoran Zhao and Ji Qiu and XiangTao Dong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Uz7a4N6aKi}
} | Single Image Super-Resolution (SISR) is a pivotal challenge in computer vision, aiming to restore high-resolution (HR) images from their low-resolution (LR) counterparts. The presence of diverse degradation kernels creates a significant domain gap, limiting the effective generalization of models in real-world scenarios. This study introduces the Bézier Curve basis-based Sparse Coding Network (BCSCN), a preprocessing network designed to mitigate input distribution discrepancies between the training and testing phases of super-resolution networks. BCSCN achieves this by removing visual defects associated with the degradation kernel in LR images, such as artifacts, residual structures, and noise. Additionally, we propose a set of rewards to guide the search for basis coefficients in BCSCN, enhancing the preservation of main content while eliminating information related to degradation. The experimental results highlight the importance of BCSCN, showcasing its capacity to effectively reduce domain gaps and enhance the generalization of super-resolution networks. | BCSCN:Reducing Domain Gap through Bézier Curve basis-based Sparse Coding Network for Single-Image Super-Resolution | [
"Wenhao Guo",
"Peng Lu",
"Xujun Peng",
"Zhaoran Zhao",
"Ji Qiu",
"XiangTao Dong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=UvmZp4m3kF | @inproceedings{
tu2024uner,
title={{UNER}: A Unified Prediction Head for Named Entity Recognition in Visually-rich Documents},
author={Yi Tu and Chong Zhang and Ya Guo and Huan Chen and Jinyang Tang and Huijia Zhu and Qi Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UvmZp4m3kF}
} | The recognition of named entities in visually-rich documents (VrD-NER) plays a critical role in various real-world scenarios and applications.
However, the research in VrD-NER faces three major challenges: complex document layouts, incorrect reading orders, and unsuitable task formulations.
To address these challenges, we propose a query-aware entity extraction head, namely UNER, to collaborate with existing multi-modal document transformers to develop more robust VrD-NER models.
The UNER head considers the VrD-NER task as a combination of sequence labeling and reading order prediction, effectively addressing the issues of discontinuous entities in documents.
Experimental evaluations on diverse datasets demonstrate the effectiveness of UNER in improving entity extraction performance.
Moreover, the UNER head enables a supervised pre-training stage on various VrD-NER datasets to enhance the document transformer backbones and exhibits substantial knowledge transfer from the pre-training stage to the fine-tuning stage.
By incorporating universal layout understanding, a pre-trained UNER-based model demonstrates significant advantages in few-shot and cross-linguistic scenarios and exhibits zero-shot entity extraction abilities. | UNER: A Unified Prediction Head for Named Entity Recognition in Visually-rich Documents | [
"Yi Tu",
"Chong Zhang",
"Ya Guo",
"Huan Chen",
"Jinyang Tang",
"Huijia Zhu",
"Qi Zhang"
] | Conference | poster | 2408.01038 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Uu8wjUFvkZ | @inproceedings{
tian2024foct,
title={{FOCT}: Few-shot Industrial Anomaly Detection with Foreground-aware Online Conditional Transport},
author={Long Tian and Hongyi Zhao and Ruiying Lu and Rongrong Wang and YuJie Wu and Liming Wang and Xiongpeng He and Xiyang Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Uu8wjUFvkZ}
} | Few-Shot Industrial Anomaly Detection (FS-IAD) has recently drawn great attention since data efficiency and the ability to design algorithms for fast migration across products have become the main concerns. The difficulty of memory-based IAD in low-data regimes primarily lies in inefficient measurement between the memory bank and query images. We address such pivotal issues from a new perspective of optimal matching between features of image regions. Taking the unbalanced nature of patch-wise industrial image features into consideration, we adopt Conditional Transport (CT) as a metric to compute the structural distance between representations of the memory bank and query images to determine feature relevance. The CT generates the optimal matching flows between unbalanced structural elements that achieve the minimum matching cost, which can be directly used for IAD since it well reflects the differences between query images and the normal memory. Recognizing that query images usually come one-by-one or batch-by-batch, we further propose an Online Conditional Transport (OCT) that makes full use of current and historical query images for IAD by simultaneously calibrating the memory bank and matching features between the calibrated memory and the current query features. Going one step further, for sparse foreground products, we employ a predominant segmentation model to implement Foreground-aware OCT (FOCT) to improve the effectiveness and efficiency of OCT by forcing the model to pay more attention to diverse targets rather than redundant background when calibrating the memory bank. FOCT can improve the diversity of the calibrated memory during the whole IAD process, which is critical for robust FS-IAD in practice. Besides, FOCT is flexible since it can be readily plugged and played with any pre-trained backbone, such as WRN, and any pre-trained segmentation model, such as SAM. The effectiveness of our model is demonstrated across diverse datasets, including the MVTec and MPDD benchmarks, achieving SOTA performance. | FOCT: Few-shot Industrial Anomaly Detection with Foreground-aware Online Conditional Transport | [
"Long Tian",
"Hongyi Zhao",
"Ruiying Lu",
"Rongrong Wang",
"YuJie Wu",
"Liming Wang",
"Xiongpeng He",
"Xiyang Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=UtL2m6lmjE | @inproceedings{
ling2024federated,
title={Federated Morozov Regularization for Shortcut Learning in Privacy Preserving Learning with Watermarked Image Data},
author={Tao Ling and Siping SHI and Hao Wang and Chuang Hu and Dan Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UtL2m6lmjE}
} | Federated learning is a promising privacy-preserving learning paradigm in which multiple clients can collaboratively learn a model with their image data kept local. For protecting data ownership, personalized watermarks are usually added to the image data by each client. However, the introduced watermarks can lead to a shortcut learning problem, where the learned model's predictions over-rely on simple watermark-related features, resulting in low accuracy on real-world data. Existing works assume the central server can directly access the predefined shortcut features during the training process. However, this assumption may fail in the federated learning setting as the shortcut features of the heterogeneous watermarked data are difficult to obtain.
In this paper, we propose a federated Morozov regularization technique, where the regularization parameter can be adaptively determined based on the watermark knowledge of all the clients in a privacy-preserving way, to eliminate the shortcut learning problem caused by the watermarked data. Specifically, federated Morozov regularization firstly performs lightweight local watermark mask estimation in each client to obtain the locations and intensities knowledge of local watermarks. Then, it aggregates the estimated local watermark masks to generate the global watermark knowledge with a weighted averaging. Finally, federated Morozov regularization determines the regularization parameter for each client by combining the local and global watermark knowledge. With the regularization parameter determined, the model is trained as normal federated learning. We implement and evaluate federated Morozov regularization based on a real-world deployment of federated learning on 40 Jetson devices with real-world datasets. The results show that federated Morozov regularization improves model accuracy by 11.22\% compared to existing baselines. | Federated Morozov Regularization for Shortcut Learning in Privacy Preserving Learning with Watermarked Image Data | [
"Tao Ling",
"Siping SHI",
"Hao Wang",
"Chuang Hu",
"Dan Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
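For the federated Morozov regularization row above, a toy sketch of the aggregation step described in the abstract: per-client watermark masks are combined by weighted averaging into global watermark knowledge, and a per-client regularization parameter is derived from local plus global knowledge. The monotone strength-to-parameter mapping and the clipping range are assumptions, not the paper's rule.

```python
import numpy as np

def aggregate_watermark_masks(local_masks, weights=None):
    """Weighted average of per-client watermark masks (each H x W in [0, 1])
    into a global watermark-knowledge map."""
    masks = np.stack(local_masks, axis=0)                  # K x H x W
    if weights is None:
        weights = np.ones(len(local_masks))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    return np.tensordot(weights, masks, axes=1)            # H x W

def regularization_parameter(local_mask, global_mask, lam_min=1e-3, lam_max=1.0):
    """Assumed heuristic: stronger combined watermark evidence -> stronger
    regularization, clipped to [lam_min, lam_max]."""
    strength = 0.5 * (local_mask.mean() + global_mask.mean())
    return float(np.clip(lam_min + strength * (lam_max - lam_min), lam_min, lam_max))
```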
null | https://openreview.net/forum?id=UtHLZDn0bC | @inproceedings{
liu2024multimodality,
title={Multi-Modality Co-Learning for Efficient Skeleton-based Action Recognition},
author={Jinfu Liu and Chen Chen and Mengyuan Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UtHLZDn0bC}
} | Skeleton-based action recognition has garnered significant attention due to the utilization of concise and resilient skeletons. Nevertheless, the absence of detailed body information in skeletons restricts performance, while other multimodal methods require substantial inference resources and are inefficient when using multimodal data during both training and inference stages. To address this and fully harness the complementary multimodal features, we propose a novel multi-modality co-learning (MMCL) framework by leveraging the multimodal large language models (LLMs) as auxiliary networks for efficient skeleton-based action recognition, which engages in multi-modality co-learning during the training stage and keeps efficiency by employing only concise skeletons in inference. Our MMCL framework primarily consists of two modules. First, the Feature Alignment Module (FAM) extracts rich RGB features from video frames and aligns them with global skeleton features via contrastive learning. Second, the Feature Refinement Module (FRM) uses RGB images with temporal information and text instruction to generate instructive features based on the powerful generalization of multimodal LLMs. These instructive text features will further refine the classification scores and the refined scores will enhance the model’s robustness and generalization in a manner similar to soft labels. Extensive experiments on NTU RGB+D, NTU RGB+D 120 and Northwestern-UCLA benchmarks consistently verify the effectiveness of our MMCL, which outperforms the existing skeleton-based action recognition methods. Meanwhile, experiments on UTD-MHAD and SYSU-Action datasets demonstrate the commendable generalization of our MMCL in zero-shot and domain-adaptive action recognition. Our code will be publicly available and can be found in the supplementary files. | Multi-Modality Co-Learning for Efficient Skeleton-based Action Recognition | [
"Jinfu Liu",
"Chen Chen",
"Mengyuan Liu"
] | Conference | poster | 2407.15706 | [
"https://github.com/liujf69/MMCL-Action"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
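For the MMCL row above, a generic symmetric InfoNCE loss of the kind the Feature Alignment Module could use to align RGB frame features with global skeleton features; the temperature and any projection heads are assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(skeleton_feats, rgb_feats, temperature=0.07):
    """Symmetric InfoNCE between paired skeleton and RGB features (B x D each):
    matching clips are pulled together, other clips in the batch pushed apart."""
    z_s = F.normalize(skeleton_feats, dim=1)
    z_r = F.normalize(rgb_feats, dim=1)
    logits = z_s @ z_r.t() / temperature                    # B x B similarities
    targets = torch.arange(z_s.size(0), device=z_s.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```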
null | https://openreview.net/forum?id=UsT1hTteBM | @inproceedings{
huang2024placiddreamer,
title={PlacidDreamer: Advancing Harmony in Text-to-3D Generation},
author={Shuo Huang and Shikun Sun and Zixuan Wang and Xiaoyu Qin and xiongyanmin and zhangyuan and Pengfei Wan and Di ZHANG and Jia Jia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UsT1hTteBM}
} | Recently, text-to-3D generation has attracted significant attention, resulting in notable performance enhancements.
Previous methods utilize end-to-end 3D generation models for initializing 3D Gaussians, and multi-view diffusion models to enforce multi-view consistency. Moreover, they employ text-to-image diffusion models to refine details with score distillation algorithms.
However, these methods exhibit two limitations.
Firstly, they encounter conflicts in generation directions since different models aim to produce diverse 3D assets.
Secondly, the issue of over-saturation in score distillation has not been thoroughly investigated and solved.
To address these limitations, we propose PlacidDreamer, a text-to-3D framework that harmonizes initialization, multi-view generation, and text-conditioned generation with a single multi-view diffusion model, while simultaneously employing a novel score distillation algorithm to achieve balanced saturation.
To unify the generation direction, we introduce the Latent-Plane module, a training-friendly plug-in extension that enables multi-view diffusion models to provide fast geometry reconstruction for initialization and enhanced multi-view images to personalize the text-to-image diffusion model.
To address the over-saturation problem, we propose to view score distillation as a multi-objective optimization problem and introduce the Balanced Score Distillation algorithm, which offers a Pareto Optimal solution that achieves both rich details and balanced saturation.
Extensive experiments validate the outstanding capabilities of our PlacidDreamer.
Code will be available on Github. | PlacidDreamer: Advancing Harmony in Text-to-3D Generation | [
"Shuo Huang",
"Shikun Sun",
"Zixuan Wang",
"Xiaoyu Qin",
"xiongyanmin",
"zhangyuan",
"Pengfei Wan",
"Di ZHANG",
"Jia Jia"
] | Conference | poster | 2407.13976 | [
"https://github.com/hansenhuang0823/placiddreamer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UrrAmUrrKE | @inproceedings{
du2024ldaaqu,
title={{LDA}-{AQU}: Adaptive Query-guided Upsampling via Local Deformable Attention},
author={Zewen Du and Zhenjiang Hu and Guiyu Zhao and Ying Jin and Hongbin Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UrrAmUrrKE}
} | Feature upsampling is an essential operation in constructing deep convolutional neural networks. However, existing upsamplers either lack specific feature guidance or necessitate the utilization of high-resolution feature maps, resulting in a loss of performance and flexibility. In this paper, we find that the local self-attention naturally has the feature guidance capability, and its computational paradigm aligns closely with the essence of feature upsampling (\ie feature reassembly of neighboring points). Therefore, we introduce local self-attention into the upsampling task and demonstrate that the majority of existing upsamplers can be regarded as special cases of upsamplers based on local self-attention. Considering the potential semantic gap between upsampled points and their neighboring points, we further introduce the deformation mechanism into the upsampler based on local self-attention, thereby proposing LDA-AQU. As a novel dynamic kernel-based upsampler, LDA-AQU utilizes the feature of queries to guide the model in adaptively adjusting the position and aggregation weight of neighboring points, thereby meeting the upsampling requirements across various complex scenarios. In addition, LDA-AQU is lightweight and can be easily integrated into various model architectures. We evaluate the effectiveness of LDA-AQU across four dense prediction tasks: object detection, instance segmentation, panoptic segmentation, and semantic segmentation. LDA-AQU consistently outperforms previous state-of-the-art upsamplers, achieving performance enhancements of 1.7 AP, 1.5 AP, 2.0 PQ, and 2.5 mIoU compared to the baseline models in the aforementioned four tasks, respectively. | LDA-AQU: Adaptive Query-guided Upsampling via Local Deformable Attention | [
"Zewen Du",
"Zhenjiang Hu",
"Guiyu Zhao",
"Ying Jin",
"Hongbin Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
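For the LDA-AQU row above, a simplified, non-deformable sketch of query-guided upsampling with local attention: each output pixel's query attends over the 3x3 low-resolution neighbourhood of its source location and reassembles the values. The deformable offsets and the exact projections of LDA-AQU are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttentionUpsampler(nn.Module):
    """Query-guided upsampling over k x k low-resolution neighbourhoods."""

    def __init__(self, channels, kernel_size=3, scale=2):
        super().__init__()
        self.k, self.scale = kernel_size, scale
        self.q_proj = nn.Conv2d(channels, channels, 1)
        self.k_proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                                   # x: B x C x H x W
        b, c, h, w = x.shape
        s, k = self.scale, self.k
        # Queries: every output pixel reuses (a projection of) its source feature.
        q = F.interpolate(self.q_proj(x), scale_factor=s, mode="nearest")
        # Keys / values: the k x k neighbourhood around each low-res pixel.
        keys = F.unfold(self.k_proj(x), k, padding=k // 2).view(b, c, k * k, h, w)
        vals = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h, w)
        keys = F.interpolate(keys.flatten(1, 2), scale_factor=s, mode="nearest").view(b, c, k * k, s * h, s * w)
        vals = F.interpolate(vals.flatten(1, 2), scale_factor=s, mode="nearest").view(b, c, k * k, s * h, s * w)
        # Attention over the k*k neighbours, guided by the query feature.
        attn = torch.softmax((q.unsqueeze(2) * keys).sum(1) / c ** 0.5, dim=1)
        return (attn.unsqueeze(1) * vals).sum(2)             # B x C x sH x sW
```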
null | https://openreview.net/forum?id=UkPhRZBPCg | @inproceedings{
li2024streamable,
title={Streamable Portrait Video Editing with Probabilistic Pixel Correspondence},
author={Xiaodi Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UkPhRZBPCg}
} | Portrait video editing has attracted wide attention thanks to its practical applications. Existing methods either target fixed-length clips or perform temporally inconsistent per-frame editing. In this work, we present a brand new system, StreamEdit, which is primarily designed to edit streaming videos. Our system follows the ideology of editing propagation to ensure temporal consistency. Concretely, we choose to edit only one reference frame and warp the outcome to obtain the editing results of other frames. For this purpose, we employ a warping module, aided by a probabilistic pixel correspondence estimation network, to help establish the pixel-wise mapping between two frames. However, such a pipeline requires the reference frame to contain all contents appearing in the video, which is scarcely possible especially when there exist large motions and occlusions. To address this challenge, we propose to adaptively replace the reference frame, benefiting from a heuristic strategy referring to the overall pixel mapping uncertainty. That way, we can easily align the editing of the before- and after-replacement reference frames via image inpainting. Extensive experimental results demonstrate the effectiveness and generalizability of our approach in editing streaming portrait videos. Code will be made public. | Streamable Portrait Video Editing with Probabilistic Pixel Correspondence | [
"Xiaodi Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
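For the StreamEdit row above, a sketch of the uncertainty-driven heuristic for replacing the reference frame in the streaming loop; the threshold and the placeholder `estimate`, `edit`, and `warp` functions are assumptions used only to show where the decision sits.

```python
def should_replace_reference(uncertainty_map, threshold=0.35):
    """When the mean pixel-mapping uncertainty w.r.t. the reference gets too
    high, the reference no longer covers the current scene content."""
    return float(uncertainty_map.mean()) > threshold


# Streaming loop sketch (estimate/edit/warp are hypothetical placeholders):
#   ref, ref_edit = first_frame, edit(first_frame)
#   for frame in stream:
#       flow, uncertainty = estimate(ref, frame)          # probabilistic correspondence
#       if should_replace_reference(uncertainty):
#           ref, ref_edit = frame, edit(frame)            # paper aligns old/new edits via inpainting
#           flow, uncertainty = estimate(ref, frame)
#       yield warp(ref_edit, flow)                        # edited current frame
```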
null | https://openreview.net/forum?id=Ugy9DyhYYD | @inproceedings{
lu2024collaborative,
title={Collaborative Training of Tiny-Large Vision Language Models},
author={Shichen Lu and Longteng Guo and Wenxuan Wang and Zijia Zhao and Tongtian Yue and Jing Liu and Si Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Ugy9DyhYYD}
} | Recently, large vision language models (LVLMs) have advanced AI by integrating visual and linguistic data for tasks like visual conversation, image captioning, and visual question answering. Current LVLM research either scales up model size for performance or reduces parameters for limited computational resources. We believe both large and tiny models have unique strengths and that collaborative training yields better results than independent training. We propose Collaborative Training of Tiny-Large Vision Language Models (CTVLMs), a framework connecting large and tiny models via a projection layer and leveraging a synergistic training strategy. Our framework improves training efficiency by strengthening the interconnection between large and tiny models. Using the parameter efficiency of tiny models, we effectively align image-text features, then apply knowledge distillation to help large models better align cross-modal information. During fine-tuning, the large model’s extensive knowledge enhances tiny model’s performance. This collaborative approach allows models to adapt to various computational resources and outperforms existing methods in vision-language tasks. | Collaborative Training of Tiny-Large Vision Language Models | [
"Shichen Lu",
"Longteng Guo",
"Wenxuan Wang",
"Zijia Zhao",
"Tongtian Yue",
"Jing Liu",
"Si Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=UdAcn64u5f | @inproceedings{
wang2024ptsbench,
title={{PTSB}ench: A Comprehensive Post-Training Sparsity Benchmark Towards Algorithms and Models},
author={Zining Wang and Jinyang Guo and Ruihao Gong and Yang Yong and Aishan Liu and Yushi Huang and Jiaheng Liu and Xianglong Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UdAcn64u5f}
} | With the increased attention to model efficiency, model sparsity technologies have developed rapidly in recent years, among which post-training sparsity (PTS) has become more and more prevalent because of its effectiveness and efficiency. However, open questions remain about better fine-grained PTS algorithms and the sparsification ability of models, which hinders the further development of this area. Therefore, a benchmark to comprehensively investigate the issues above is urgently needed. In this paper, we propose PTSBench, the first comprehensive post-training sparsity benchmark covering both PTS algorithms and models. We benchmark more than 10 general, pluggable, fine-grained PTS algorithms on 3 typical computer vision tasks using over 40 off-the-shelf model architectures. Through extensive experiments and analyses, we obtain valuable conclusions and provide several insights from both the algorithm and the model perspective, which comprehensively address the aforementioned questions. Our PTSBench provides (1) in-depth and comprehensive evaluations of the sparsification abilities of models, (2) new observations for a better understanding of PTS methods with respect to both algorithms and models, and (3) an upcoming well-structured and easy-to-integrate open-source framework for evaluating model sparsification ability. We hope this work will provide illuminating conclusions and advice for future studies of post-training sparsity methods and sparsification-friendly model design. | PTSBench: A Comprehensive Post-Training Sparsity Benchmark Towards Algorithms and Models | [
"Zining Wang",
"Jinyang Guo",
"Ruihao Gong",
"Yang Yong",
"Aishan Liu",
"Yushi Huang",
"Jiaheng Liu",
"Xianglong Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
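For the PTSBench row above, the simplest post-training sparsity baseline one could run under such a benchmark: per-layer magnitude pruning of a pre-trained model with no retraining. The benchmarked PTS algorithms are far more refined; this only illustrates the problem setting.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def magnitude_prune_(model, sparsity=0.5):
    """Zero out the `sparsity` fraction of smallest-magnitude weights in every
    Linear/Conv2d layer, in place, without any retraining."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).to(w.dtype))

# Usage: magnitude_prune_(pretrained_model, sparsity=0.5)
```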
null | https://openreview.net/forum?id=UciqHfX9Ef | @inproceedings{
hai2024whats,
title={What's the Real: A Novel Design Philosophy for Robust {AI}-Synthesized Voice Detection},
author={Xuan Hai and Xin Liu and Yuan Tan and Gang Liu and Song Li and Weina Niu and Rui Zhou and Xiaokang Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UciqHfX9Ef}
} | Voice is one of the most widely used media for information transmission in human society. While high-quality synthetic voices are extensively utilized in various applications, they pose significant risks to content security and trust building. Numerous studies have concentrated on fake voice detection to mitigate these risks, with many claiming to achieve promising performance. However, recent research has demonstrated that existing fake voice detectors suffer from serious overfitting to speaker-irrelative features (SiFs) and cannot be used in real-world scenarios. In this paper, we analyze the limitations of existing fake voice detectors and propose a new design philosophy, guiding the detection model to prioritize learning human voice features rather than the difference between the human voice and the synthetic voice. Based on this philosophy, we propose a novel fake voice detection framework named SiFSafer, which uses pre-trained speech representation models to enhance the learning of feature distribution in human voices and the adapter fine-tuning to optimize the performance. The evaluation shows that the average EERs of existing fake voice detectors in the ASVspoof challenge can exceed 20\% if the SiFs like silence segments are removed, while SiFSafer achieves an EER of less than 8\%, indicating that SiFSafer is robust to SiFs and strongly resistant to existing attacks. | What's the Real: A Novel Design Philosophy for Robust AI-Synthesized Voice Detection | [
"Xuan Hai",
"Xin Liu",
"Yuan Tan",
"Gang Liu",
"Song Li",
"Weina Niu",
"Rui Zhou",
"Xiaokang Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
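For the SiFSafer row above, a small sketch of stripping silence segments, one of the speaker-irrelative features the abstract says detectors overfit to, before scoring an utterance; the 16 kHz rate and `top_db` value are assumptions.

```python
import numpy as np
import librosa

def strip_silence(path, top_db=30, sr=16000):
    """Load an utterance and drop its silent segments so a detector cannot
    rely on them when deciding real vs. synthetic."""
    y, sr = librosa.load(path, sr=sr)
    intervals = librosa.effects.split(y, top_db=top_db)   # non-silent [start, end) sample ranges
    if len(intervals) == 0:
        return y, sr
    return np.concatenate([y[s:e] for s, e in intervals]), sr
```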
null | https://openreview.net/forum?id=UQuuFPeKb2 | @inproceedings{
chuang2024universal,
title={Universal Frequency Domain Perturbation for Single-Source Domain Generalization},
author={liu chuang and Yichao Cao and Haogang Zhu and Xiu Su},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UQuuFPeKb2}
} | In this work, we introduce a novel approach to single-source domain generalization (SDG) in medical imaging, focusing on overcoming the challenge of style variation in out-of-distribution (OOD) domains without requiring domain labels or additional generative models. We propose a \textbf{Uni}versal \textbf{Freq}uency Perturbation framework for \textbf{SDG} termed as \textit{\textbf{UniFreqSDG}}, that performs hierarchical feature-level frequency domain perturbations, facilitating the model's ability to handle diverse OOD styles. Specifically, we design a learnable spectral perturbation module that adaptively learns the frequency distribution range of samples, allowing for precise low-frequency (LF) perturbation. This adaptive approach not only generates stylistically diverse samples but also preserves domain-invariant anatomical features without the need for manual hyperparameter tuning. Then, the frequency features before and after perturbation are decoupled and recombined through the Content Preservation Reconstruction operation, effectively preventing the loss of discriminative content information. Furthermore, we introduce the Active Domain-variance Inducement Loss to encourage effective perturbation in the frequency domain while ensuring the sufficient decoupling of domain-invariant and domain-style features. Extensive experiments demonstrate that \textit{\textbf{UniFreqSDG}} increases the dice score by an average of 7.47\% (from 77.98\% to 85.45\%) on the fundus dataset and 4.99\% (from 71.42\% to 76.73\%) on the prostate dataset compared to the state-of-the-art approaches. | Universal Frequency Domain Perturbation for Single-Source Domain Generalization | [
"liu chuang",
"Yichao Cao",
"Haogang Zhu",
"Xiu Su"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
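For the UniFreqSDG row above, a hand-rolled low-frequency amplitude perturbation in the 2-D Fourier domain, the generic recipe this line of work builds on. The fixed band radius and strength are assumptions; the paper learns the perturbed spectral range adaptively and perturbs hierarchical features rather than only raw images.

```python
import torch

def perturb_low_frequency(image, radius=0.1, strength=0.3):
    """Randomly rescale the low-frequency amplitude of a C x H x W image while
    keeping the phase, which mostly changes style and keeps anatomy/content."""
    fft = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
    amp, phase = fft.abs(), fft.angle()
    _, h, w = image.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(radius * h)), max(1, int(radius * w))
    scale = 1.0 + strength * (2.0 * torch.rand(1).item() - 1.0)
    amp[:, cy - ry:cy + ry, cx - rx:cx + rx] *= scale      # perturb the central (low-freq) band
    return torch.fft.ifft2(torch.fft.ifftshift(torch.polar(amp, phase), dim=(-2, -1))).real
```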
null | https://openreview.net/forum?id=UNv6WBb6ZT | @inproceedings{
tang2024domainconditioned,
title={Domain-Conditioned Transformer for Fully Test-time Adaptation},
author={Yushun Tang and Shuoshuo Chen and Jiyuan Jia and Yi Zhang and Zhihai He},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UNv6WBb6ZT}
} | Fully test-time adaptation aims to adapt a network model online based on sequential analysis of input samples during the inference stage. We observe that, when applying a transformer network model to a new domain, the self-attention profiles of image samples in the target domain deviate significantly from those in the source domain, which results in large performance degradation during domain changes. To address this important issue, we propose a new structure for the self-attention modules in the transformer. Specifically, we incorporate three domain-conditioning vectors, called domain conditioners, into the query, key, and value components of the self-attention module. We learn a network to generate these three domain conditioners from the class token at each transformer network layer. We find that, during fully online test-time adaptation, these domain conditioners at each transformer network layer are able to gradually remove the impact of domain shift and largely recover the original self-attention profile. Our extensive experimental results demonstrate that the proposed domain-conditioned transformer significantly improves the online fully test-time domain adaptation performance and outperforms existing state-of-the-art methods by large margins. | Domain-Conditioned Transformer for Fully Test-time Adaptation | [
"Yushun Tang",
"Shuoshuo Chen",
"Jiyuan Jia",
"Yi Zhang",
"Zhihai He"
] | Conference | poster | 2410.10442 | [
"https://github.com/yushuntang/dct"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
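For the domain-conditioned transformer row above, a single-head sketch of self-attention whose query, key, and value are shifted by three conditioner vectors predicted from the class token. Head count, the conditioner network, and the additive form are assumptions.

```python
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    """Self-attention with additive domain conditioners derived from the class token."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.conditioner = nn.Linear(dim, 3 * dim)   # class token -> (dq, dk, dv)

    def forward(self, tokens):                       # tokens: B x N x D, tokens[:, 0] is the class token
        b, n, d = tokens.shape
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        dq, dk, dv = self.conditioner(tokens[:, 0]).chunk(3, dim=-1)
        q = q + dq.unsqueeze(1)                      # shift every token's query/key/value
        k = k + dk.unsqueeze(1)
        v = v + dv.unsqueeze(1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        return self.proj(attn @ v)
```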
null | https://openreview.net/forum?id=ULD5RCk0oo | @inproceedings{
zhou2024bsbprwkv,
title={{BSBP}-{RWKV}: Background Suppression with Boundary Preservation for Efficient Medical Image Segmentation},
author={Xudong Zhou and Tianxiang Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ULD5RCk0oo}
} | Medical image segmentation is of great significance to disease diagnosis and treatment planning. Despite multiple progresses, most present methods (1) pay insufficient attention to suppressing background noise disturbance that impacts segmentation accuracy and (2) are not efficient enough, especially when the images are of large resolutions. To address the two challenges, we turn to a traditional de-noising method and a new efficient network structure and propose BSBP-RWKV for accurate and efficient medical image segmentation.
Specifically, we combine the advantage of Perona-Malik Diffusion (PMD) in suppressing noise without losing boundary details with the efficient structure of RWKV, and devise the DWT-PMD RWKV Block in one of our encoder branches to preserve the boundary details of lesion areas while suppressing background noise disturbance within an efficient structure. Then we feed the de-noised lesion boundary cues to our proposed Multi-Step Runge-Kutta convolutional Block to supplement the cues with more local details. We also propose a novel loss function for shape refinement that can align the shape of predicted lesion areas with GT masks in both the spatial and frequency domains. Experiments on ISIC 2016 and Kvasir-SEG show the superior accuracy and efficiency of our BSBP-RWKV. Specifically, BSBP-RWKV reduces complexity by a factor of 5.8 compared with the SOTA while also cutting down GPU memory usage by over 62.7% for each 1024×1024 image during inference. | BSBP-RWKV: Background Suppression with Boundary Preservation for Efficient Medical Image Segmentation | [
"Xudong Zhou",
"Tianxiang Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
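For the BSBP-RWKV row above, a classic Perona-Malik diffusion iteration, the edge-preserving denoising prior named in the abstract; the exponential edge-stopping function, `kappa`, step size, and wrap-around borders are standard illustrative choices, not the paper's DWT-PMD block.

```python
import numpy as np

def perona_malik(image, iterations=20, kappa=0.1, step=0.2):
    """Anisotropic diffusion: smooths homogeneous regions while preserving
    strong edges. `image` is a 2-D float array, ideally scaled to [0, 1]."""
    u = image.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)          # edge-stopping function
    for _ in range(iterations):
        dn = np.roll(u, -1, axis=0) - u              # finite differences to the
        ds = np.roll(u, 1, axis=0) - u               # four neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```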
null | https://openreview.net/forum?id=UL56lbucD3 | @inproceedings{
bin2024gallerygpt,
title={Gallery{GPT}: Analyzing Paintings with Large Multimodal Models},
author={Yi Bin and WENHAO SHI and Yujuan Ding and Zhiqiang Hu and Zheng WANG and Yang Yang and See-Kiong Ng and Heng Tao Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UL56lbucD3}
} | Artwork analysis is an important and fundamental skill for art appreciation, which could enrich personal aesthetic sensibility and facilitate the critical thinking ability. Understanding artworks is challenging due to its subjective nature, diverse interpretations, and complex visual elements, requiring expertise in art history, cultural background, and aesthetic theory. However, limited by the data collection and model ability, previous works for automatically analyzing artworks mainly focus on classification, retrieval, and other simple tasks, which is far from the goal of AI. To facilitate the research progress, in this paper, we step further to compose comprehensive analysis inspired by the remarkable perception and generation ability of large multimodal models. Specifically, we first propose a task of composing paragraph analysis for artworks, i.e., painting in this paper, only focusing on visual characteristics to formulate more comprehensive understanding of artworks. To support the research on formal analysis, we collect a large dataset PaintingForm, with about 19k painting images and 50k analysis paragraphs. We further introduce a superior large multimodal model for painting analysis composing, dubbed GalleryGPT, which is slightly modified and fine-tuned based on LLaVA architecture leveraging our collected data. We conduct formal analysis generation and zero-shot experiments across several datasets to assess the capacity of our model. The results show remarkable performance improvements comparing with powerful baseline LMMs, demonstrating its superb ability of art analysis and generalization. \textcolor{blue}{The codes and model are available at: \textit{\url{https://github.com/steven640pixel/GalleryGPT}}}. | GalleryGPT: Analyzing Paintings with Large Multimodal Models | [
"Yi Bin",
"WENHAO SHI",
"Yujuan Ding",
"Zhiqiang Hu",
"Zheng WANG",
"Yang Yang",
"See-Kiong Ng",
"Heng Tao Shen"
] | Conference | oral | 2408.00491 | [
"https://github.com/steven640pixel/gallerygpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UFMSbffX73 | @inproceedings{
wang2024specgaussian,
title={SpecGaussian with latent features: A high-quality modeling of the view-dependent appearance for 3D Gaussian Splatting},
author={Zhiru Wang and Shiyun Xie and Chengwei Pan and Guoping Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UFMSbffX73}
} | Recently, the 3D Gaussian Splatting (3D-GS) method has achieved great success in novel view synthesis, providing real-time rendering while ensuring high-quality rendering results. However, this method faces challenges in modeling specular reflections and handling anisotropic appearance components, especially in dealing with view-dependent color under complex lighting conditions. Additionally, 3D-GS uses spherical harmonics to learn the color representation, which has limited ability to represent complex scenes. To overcome these challenges, we introduce Latent-SpecGS, an approach that utilizes a universal latent neural descriptor within each 3D Gaussian. This enables a more effective representation of 3D feature fields, including appearance and geometry. Moreover, two parallel CNNs are designed to decode the splatted feature maps into diffuse color and specular color separately. A mask that depends on the viewpoint is learned to merge these two colors, resulting in the final rendered image. Experimental results demonstrate that our method obtains competitive performance in novel view synthesis and extends the ability of 3D-GS to handle intricate scenarios with specular reflections. | SpecGaussian with latent features: A high-quality modeling of the view-dependent appearance for 3D Gaussian Splatting | [
"Zhiru Wang",
"Shiyun Xie",
"Chengwei Pan",
"Guoping Wang"
] | Conference | poster | 2409.05868 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
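For the SpecGaussian row above, a toy decoder with two parallel convolutional heads that turn a splatted feature map into diffuse and specular colors plus a mask that merges them; channel sizes and the convex blending form are assumptions.

```python
import torch
import torch.nn as nn

class DiffuseSpecularDecoder(nn.Module):
    """Decode a B x F x H x W splatted feature map into a blended RGB image."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()

        def head(out_ch):
            return nn.Sequential(nn.Conv2d(feat_dim, hidden, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(hidden, out_ch, 3, padding=1))

        self.diffuse, self.specular, self.mask = head(3), head(3), head(1)

    def forward(self, feat):
        diffuse = torch.sigmoid(self.diffuse(feat))
        specular = torch.sigmoid(self.specular(feat))
        m = torch.sigmoid(self.mask(feat))           # view-dependent blending mask
        return (1.0 - m) * diffuse + m * specular    # final rendered image
```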
null | https://openreview.net/forum?id=UBPu6deCPt | @inproceedings{
wang2024robust,
title={Robust Contrastive Cross-modal Hashing with Noisy Labels},
author={Longan Wang and Yang Qin and Yuan Sun and Dezhong Peng and Xi Peng and Peng Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=UBPu6deCPt}
} | Cross-modal hashing has emerged as a promising technique for retrieving relevant information across distinct media types thanks to its low storage cost and high retrieval efficiency. However, the success of most existing methods heavily relies on large-scale well-annotated datasets, which are costly and scarce in the real world due to ubiquitous labeling noise. To tackle this problem, in this paper, we propose a novel framework, termed Noise Resistance Cross-modal Hashing (NRCH), to learn hashing with noisy labels by overcoming two key challenges, i.e., noise overfitting and error accumulation. Specifically, i) to mitigate the overfitting issue caused by noisy labels, we present a novel Robust Contrastive Hashing loss (RCH) that targets homologous pairs instead of noisy positive pairs, thus avoiding overemphasizing noise. In other words, RCH enforces the model to focus on more reliable positives instead of unreliable ones constructed by noisy labels, thereby enhancing the robustness of the model against noise; ii) to circumvent error accumulation, a Dynamic Noise Separator (DNS) is proposed to dynamically and accurately separate the clean and noisy samples by adaptively fitting the loss distribution, thus alleviating the adverse influence of noise on iterative training. Finally, we conduct extensive experiments on four widely used benchmarks to demonstrate the robustness of our NRCH against noisy labels for cross-modal retrieval. The code is available at: https://github.com/LonganWANG-cs/NRCH.git. | Robust Contrastive Cross-modal Hashing with Noisy Labels | [
"Longan Wang",
"Yang Qin",
"Yuan Sun",
"Dezhong Peng",
"Xi Peng",
"Peng Hu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
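For the NRCH row above, a generic small-loss separator in the spirit of the Dynamic Noise Separator: fit a two-component Gaussian mixture to per-sample losses and treat a high posterior under the low-mean component as "clean"; the threshold and GMM settings are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def separate_clean_noisy(per_sample_losses, clean_threshold=0.5):
    """Return a boolean clean-mask and the per-sample clean probability."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=1e-4).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))    # low-loss mode = clean
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > clean_threshold, p_clean
```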
null | https://openreview.net/forum?id=U4aT9LYsXb | @inproceedings{
zhang2024linkthief,
title={LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against {GNN}},
author={Yuxing Zhang and Siyuan Meng and Chunchun Chen and Mengyao Peng and Hongyan Gu and Xinli Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=U4aT9LYsXb}
} | Graph neural networks (GNNs) have a wide range of applications in multimedia. Recent studies have shown that GNNs are vulnerable to link stealing attacks, which infers the existence of edges in the target GNN’s training graph. Existing methods are usually based on the assumption that links exist between two nodes that share similar posteriors; however, they fail to focus on links that do not hold under this assumption. To this end, we propose LinkThief, an improved link stealing attack that combines generalized structure knowledge with node similarity, in a scenario where the attackers' background knowledge contains partially leaked target graph and shadow graph.
Specifically, to equip the attack model with insights into the link structure spanning both the shadow graph and the target graph, we introduce the idea of creating a Shadow-Target Bridge Graph and extracting edge subgraph structure features from it.
Through theoretical analysis from the perspective of privacy theft, we first explore how to implement the aforementioned ideas. Building upon the findings, we design the Bridge Graph Generator to construct the Shadow-Target Bridge Graph. Then, the subgraph around the link is sampled by the Edge Subgraph Preparation Module. Finally, the Edge Structure Feature Extractor is designed to obtain generalized structure knowledge, which is combined with node similarity to form the features provided to the attack model.
Extensive experiments validate the correctness of theoretical analysis and demonstrate that LinkThief still effectively steals links without extra assumptions. | LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against GNN | [
"Yuxing Zhang",
"Siyuan Meng",
"Chunchun Chen",
"Mengyao Peng",
"Hongyan Gu",
"Xinli Huang"
] | Conference | poster | 2410.02826 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TyqDTatA2Y | @inproceedings{
han2024prior,
title={Prior Metadata-Driven {RAW} Reconstruction: Eliminating the Need for Per-Image Metadata},
author={Wencheng Han and Chen Zhang and Yang Zhou and Wentao Liu and Chen Qian and Cheng-zhong Xu and Jianbing Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TyqDTatA2Y}
} | While RAW images are efficient for image editing and perception tasks, their large size can strain camera storage and bandwidth. Techniques exist to reconstruct RAW images from sRGB data, but these methods typically require additional metadata from the RAW image, which adds to camera processing demands. To address this problem, we propose using Prior Meta as a reference to reconstruct the RAW data instead of relying on per-image metadata. Prior metadata is extracted offline from reference RAW images, which are usually part of the training dataset and have similar scenes and light conditions as the target image. With this prior metadata, the camera does not need to provide any extra processing other than the sRGB images, and our model can autonomously find the desired prior information. To achieve this, we design a three-step pipeline. First, we build a pixel searching network that can find the most similar pixels in the reference RAW images as prior information. Then, in the second step, we compress the large-scale reference images to about 0.02% of their original size to reduce the searching cost. Finally, in the last step, we develop a neural network reconstructor to reconstruct the high-fidelity RAW images. Our model achieves comparable, and even better, performance than RAW reconstruction methods based on metadata. | Prior Metadata-Driven RAW Reconstruction: Eliminating the Need for Per-Image Metadata | [
"Wencheng Han",
"Chen Zhang",
"Yang Zhou",
"Wentao Liu",
"Chen Qian",
"Cheng-zhong Xu",
"Jianbing Shen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Twe5GWM0Hl | @inproceedings{
luo2024emvcc,
title={{EMVCC}: Enhanced Multi-View Contrastive Clustering for Hyperspectral Images},
author={Fulin Luo and Yi Liu and Xiuwen Gong and Zhixiong Nan and Tan Guo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Twe5GWM0Hl}
} | Cross-view consensus representation plays a critical role in hyperspectral images (HSIs) clustering. Recent multi-view contrastive cluster methods utilize contrastive loss to extract contextual consensus representation. However, these methods have a fatal flaw: contrastive learning may treat similar heterogeneous views as positive sample pairs and dissimilar homogeneous views as negative sample pairs. At the same time, the data representation via self-supervised contrastive loss is not specifically designed for clustering. Thus, to tackle this challenge, we propose a novel multi-view clustering method, i.e., Enhanced Multi-View Contrastive Clustering (EMVCC). First, the spatial multi-view is designed to learn the diverse features for contrastive clustering, and the globally relevant information of spectrum-view is extracted by Transformer, enhancing the spatial multi-view differences between neighboring samples. Then, a joint self-supervised loss is designed to constrain the consensus representation from different perspectives to efficiently avoid false negative pairs. Specifically, to preserve the diversity of multi-view information, the features are enhanced by using probabilistic contrastive loss, and the data is projected into a semantic representation space, ensuring that the similar samples in this space are closer in distance. Finally, we design a novel clustering loss that aligns the view feature representation with high confidence pseudo-labels for promoting the network to learn cluster-friendly features. In the training process, the joint self-supervised loss is used to optimize the cross-view features. Abundant experiment studies on numerous benchmarks verify the superiority of EMVCC in comparison to some state-of-the-art clustering methods. The codes are available at https://github.com/YiLiu1999/EMVCC. | EMVCC: Enhanced Multi-View Contrastive Clustering for Hyperspectral Images | [
"Fulin Luo",
"Yi Liu",
"Xiuwen Gong",
"Zhixiong Nan",
"Tan Guo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=TvsocONzcC | @inproceedings{
shen2024restoring,
title={Restoring Real-World Degraded Events Improves Deblurring Quality},
author={Yeqing Shen and Shang Li and Kun Song},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TvsocONzcC}
} | Due to its high speed and low latency, DVS is frequently employed in motion deblurring. Ideally, high-quality events would adeptly capture intricate motion information. However, real-world events are generally degraded, thereby introducing significant artifacts into the deblurred results. In response to this challenge, we model the degradation of events and propose RDNet to improve the quality of image deblurring. Specifically, we first analyze the mechanisms underlying degradation and simulate paired events based on that. These paired events are then fed into the first stage of the RDNet for training the restoration model. The events restored in this stage serve as a guide for the second-stage deblurring process. To better assess the deblurring performance of different methods on real-world degraded events, we present a new real-world dataset named DavisMCR. This dataset incorporates events with diverse degradation levels, collected by manipulating environmental brightness and target object contrast. Our experiments are conducted on synthetic datasets (GOPRO), real-world datasets (REBlur), and the proposed dataset (DavisMCR). The results demonstrate that RDNet outperforms classical event denoising methods in event restoration. Furthermore, RDNet exhibits better performance in deblurring tasks compared to state-of-the-art methods. DavisMCR are available at https://github.com/Yeeesir/DVS_RDNet. | Restoring Real-World Degraded Events Improves Deblurring Quality | [
"Yeqing Shen",
"Shang Li",
"Kun Song"
] | Conference | poster | 2407.20502 | [
"https://github.com/yeeesir/dvs_rdnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=TuU8TQVOoj | @inproceedings{
liang2024divide,
title={Divide and Conquer: Isolating Normal-Abnormal Attributes in Knowledge Graph-Enhanced Radiology Report Generation},
author={Xiao Liang and Yanlei Zhang and Di Wang and Haodi Zhong and Ronghan Li and Quan Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=TuU8TQVOoj}
} | Radiology report generation aims to automatically generate clinical descriptions for radiology images, reducing the workload of radiologists. Compared to general image captioning tasks, the subtle differences in medical images and the specialized, complex nature of medical terminology limit the performance of data-driven radiology report generation. Previous research has attempted to leverage prior knowledge, such as organ-disease graphs, to enhance models' abilities to identify specific diseases and generate corresponding medical terminology. However, these methods cover only a limited number of disease types, focusing solely on disease terms mentioned in reports but ignoring their normal or abnormal attributes, which are critical to generating accurate reports. To address this issue, we propose a Divide-and-Conquer approach, named DCG, which separately constructs disease-free and disease-specific nodes within the knowledge graphs. Specifically, we extracted more comprehensive organ-disease entities from reports than previous methods and constructed disease-free and disease-specific nodes by rigorously distinguishing between normal conditions and specific diseases. This enables our model to consciously focus on abnormal information and mitigate the impact of excessively common diseases on report generation. Subsequently, the constructed graph is utilized to enhance the correlation between visual representations and disease terminology, thereby guiding the decoder in report generation. Extensive experiments conducted on benchmark datasets IU-Xray and MIMIC-CXR demonstrate the superiority of our proposed method. Code is available at the anonymous repository {https://anonymous.4open.science/r/DCG_Enhanced_distilGPT2-37D2}. | Divide and Conquer: Isolating Normal-Abnormal Attributes in Knowledge Graph-Enhanced Radiology Report Generation | [
"Xiao Liang",
"Yanlei Zhang",
"Di Wang",
"Haodi Zhong",
"Ronghan Li",
"Quan Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Tsz6Kra6fX | @inproceedings{
wang2024multimodal,
title={Multimodal Low-light Image Enhancement with Depth Information},
author={Zhen Wang and Dongyuan Li and Guang Li and Ziqing Zhang and Renhe Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=Tsz6Kra6fX}
} | Low-light image enhancement has been researched for several years. However, current image restoration methods predominantly focus on recovering images from RGB inputs alone, overlooking the potential of incorporating more modalities. With the advancements in personal handheld devices, we can now easily capture images with depth information using devices such as mobile phones. The integration of depth information into image restoration is a research question worthy of exploration. Therefore, in this paper, we propose a multimodal low-light image enhancement task based on depth information and establish a dataset named **LED** (**L**ow-light Image **E**nhanced with **D**epth Map), consisting of 1,365 samples. Each sample in our dataset includes a low-light image, a normal-light image, and the corresponding depth map. Moreover, for the LED dataset, we design a corresponding multimodal method, which processes the input images and depth map information simultaneously to generate the predicted normal-light images. Experimental results and detailed ablation studies prove the effectiveness of our method, which exceeds previous single-modal state-of-the-art methods from the relevant field. | Multimodal Low-light Image Enhancement with Depth Information | [
"Zhen Wang",
"Dongyuan Li",
"Guang Li",
"Ziqing Zhang",
"Renhe Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |