
A Structural Consensus Representation Learning Framework for Multi-View Clustering

Knowledge-Based Systems (2024)

Abstract
Learning a structural consensus representation is crucial for various multi-view tasks, such as multi-view clustering and multi-view classification. However, this task is challenging due to the inconsistent-structure problem caused by the view-level bias of multi-view data. To address this issue, we propose a structural consensus representation learning (SCRL) framework, which contains two cascading representation training processes, to learn and refine structural consensus representations. A Consensual Joint Multi-AutoEncoder is developed, which estimates a consensual structure shared among all views and learns each view representation guided by the consensual structure in a unified process. By applying EM (Expectation–Maximization)-style optimization, the view representations and the consensual structure are optimized iteratively. Then, we devise a Hybrid Contrastive Refining Net, which contains two contrastive refining components to fine-tune the learned representations by further eliminating inconsistent view representations within the same view and across different views. The proposed SCRL framework is able to debias the learning of view representations and provide structural consensus representations for multi-view clustering. Extensive experiments and analysis on several real datasets show the effectiveness of our proposed SCRL framework.
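The abstract describes an EM-style alternation: estimate a consensual structure shared by all views, then refit each view's representation toward it, and iterate. A minimal toy sketch of that alternation is below, assuming nothing beyond the abstract: the function name `em_style_consensus`, the use of averaged similarity graphs as the "consensual structure", and the spectral embedding at the end are all illustrative choices, not the paper's actual method.

```python
import numpy as np

def em_style_consensus(views, dim=2, n_iters=10):
    """Toy EM-style alternation (hypothetical sketch, not the paper's code).

    E-step: estimate a consensual structure S as the mean of view-wise
    similarity graphs. M-step: pull each view's graph toward S. Finally,
    embed the consensus via its top-`dim` eigenvectors.
    """
    def sim(X):
        # view-wise similarity graph, row-normalized so views are comparable
        G = X @ X.T
        norms = np.linalg.norm(G, axis=1, keepdims=True) + 1e-12
        return G / norms

    graphs = [sim(V) for V in views]
    for _ in range(n_iters):
        # E-step: consensual structure = average of the view graphs
        S = np.mean(graphs, axis=0)
        # M-step: blend each view graph toward the consensus (debiasing)
        graphs = [0.5 * G + 0.5 * S for G in graphs]

    # structural consensus representation: spectral embedding of S
    S = np.mean(graphs, axis=0)
    S = (S + S.T) / 2  # symmetrize before eigendecomposition
    _, vecs = np.linalg.eigh(S)
    return vecs[:, -dim:]  # shape (n_samples, dim)
```

In this sketch the blending weight (0.5) plays the role of the guidance strength; the paper's contrastive refining stage, which further aligns representations within and across views, is not modeled here.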
Key words
Multi-view data, Structural consensus representation, Consensual joint learning, Hybrid contrastive refining

实验】:使用三个不同的模型对SD-Eval进行评估,构建的训练集包含1,052.72小时语音数据和724.4k发言,通过客观指标(如BLEU和ROUGE)、主观评价及基于LLM的指标进行综合评价,实验结果显示加入副语言和环境信息的模型在客观和主观测量上均优于对照组,且LLM-based指标与人类评价的相关性更高。数据集开源地址为https://github.com/amphionspace/SD-Eval。