
A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion

SIAM Journal on Imaging Sciences (2013) | CCF B | SCI Zone 2/3

Rice University

Cited 1249 | Views 92
Abstract
This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka--Łojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, is accessible from the authors' homepages.
Key words
block multiconvex, block coordinate descent, Kurdyka-Łojasiewicz inequality, Nash equilibrium, nonnegative matrix and tensor factorization, matrix completion, tensor completion, proximal gradient method
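To make the abstract concrete, here is a minimal sketch of the kind of block coordinate descent the paper describes, applied to nonnegative matrix factorization: each block (the factor X or Y) is updated by a proximal (projected) gradient step while the other block is held fixed. This is an illustrative NumPy implementation under simple assumptions (exact Lipschitz step sizes, no extrapolation), not the authors' MATLAB code; the function name `nmf_bcd` and its parameters are invented for this example.

```python
import numpy as np

def nmf_bcd(M, r, iters=200, seed=0):
    """Illustrative prox-gradient block coordinate descent for NMF:
    minimize 0.5 * ||M - X @ Y||_F^2  subject to  X >= 0, Y >= 0.

    Each block is updated by one gradient step followed by the proximal
    operator of the nonnegativity constraint (projection onto X, Y >= 0).
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, r))
    Y = rng.random((r, n))
    for _ in range(iters):
        # Block 1: update X with Y fixed. The gradient of the smooth term
        # in X is (X @ Y - M) @ Y.T, with Lipschitz constant ||Y @ Y.T||_2.
        L_x = np.linalg.norm(Y @ Y.T, 2) + 1e-12
        grad_X = (X @ Y - M) @ Y.T
        X = np.maximum(X - grad_X / L_x, 0.0)  # prox of the >= 0 indicator
        # Block 2: update Y with X fixed, symmetrically.
        L_y = np.linalg.norm(X.T @ X, 2) + 1e-12
        grad_Y = X.T @ (X @ Y - M)
        Y = np.maximum(Y - grad_Y / L_y, 0.0)
    return X, Y
```

The objective is convex in X for fixed Y and in Y for fixed X (block multiconvexity), which is what makes each block update a tractable convex subproblem even though the joint problem is nonconvex.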
Related Papers

Matrix Recovery from Quantized and Corrupted Measurements

2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2014

Cited 47

A Compact Binary Aggregated Descriptor Via Dual Selection for Visual Search

Proceedings of the 24th ACM international conference on Multimedia 2016

Cited 2

Structurally Regularized Non-negative Tensor Factorization for Spatio-Temporal Pattern Discoveries

Machine Learning and Knowledge Discovery in Databases Lecture Notes in Computer Science 2017

Cited 12

Markov Chain Block Coordinate Descent.

Computational Optimization and Applications 2019

Cited 13

Summary

Key points: This paper proposes a generalized block coordinate descent method for regularized block multiconvex optimization and applies it to nonnegative tensor factorization and completion. Its novelty lies in proving global convergence of the algorithm and estimating its asymptotic convergence rate.

Methods: A block coordinate descent method whose convergence analysis is based on the Kurdyka-Łojasiewicz inequality.

Experiments: The experiments include nonnegative matrix and tensor factorization on synthetic and hyperspectral data, as well as matrix and tensor recovery from incomplete observations on image sets from the CBCL and ORL databases. Compared with existing state-of-the-art algorithms, the proposed algorithms perform well in both speed and solution quality.