Mathematics > Statistics Theory
[Submitted on 29 May 2014 (v1), last revised 15 May 2016 (this version, v3)]
Title: Optimal Shrinkage of Singular Values
Abstract: We consider recovery of low-rank matrices from noisy data by shrinkage of singular values, in which a single, univariate nonlinearity is applied to each of the empirical singular values. We adopt an asymptotic framework in which the matrix size is much larger than the rank of the signal matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. For a variety of loss functions, including Mean Squared Error (MSE, the squared Frobenius norm), the nuclear norm loss, and the operator norm loss, we show that in this framework there is a well-defined asymptotic loss that we evaluate precisely in each case. In fact, each of the loss functions we study admits a unique admissible shrinkage nonlinearity dominating all other nonlinearities. We provide a general method for evaluating these optimal nonlinearities, and demonstrate our framework by working out simple, explicit formulas for the optimal nonlinearities in the Frobenius, nuclear, and operator norm cases. For example, for a square $n$-by-$n$ matrix of rank $r$ observed in white noise with level $\sigma$, the optimal nonlinearity for MSE loss simply shrinks each data singular value $y$ to $\sqrt{y^2 - 4n\sigma^2}$ (or to 0 if $y < 2\sqrt{n}\sigma$). This optimal nonlinearity guarantees an asymptotic MSE of $2nr\sigma^2$, which compares favorably with optimally tuned hard thresholding and optimally tuned soft thresholding, which provide guarantees of $3nr\sigma^2$ and $6nr\sigma^2$, respectively. Our general method also allows one to evaluate optimal shrinkers numerically to arbitrary precision. As an example, we compute optimal shrinkers for the Schatten-$p$ norm loss, for any $p > 0$.
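The following is a minimal sketch (not the authors' reference code) of the MSE-optimal shrinker quoted in the abstract, written in NumPy. It assumes a square $n$-by-$n$ observation with i.i.d. noise of level $\sigma$; the function name, the toy data, and the comparison printout are illustrative assumptions, not part of the paper.

```python
import numpy as np

def optimal_frobenius_shrinkage(Y, sigma):
    """Denoise a square matrix Y by the optimal singular-value shrinker for MSE
    (Frobenius) loss from the abstract: each empirical singular value y is
    shrunk to sqrt(y^2 - 4*n*sigma^2), or to 0 when y < 2*sqrt(n)*sigma."""
    n = Y.shape[0]
    assert Y.shape[0] == Y.shape[1], "this sketch assumes a square matrix"
    U, y, Vt = np.linalg.svd(Y, full_matrices=False)
    threshold = 2.0 * np.sqrt(n) * sigma
    # Apply the univariate nonlinearity to each empirical singular value.
    shrunk = np.where(y >= threshold,
                      np.sqrt(np.maximum(y**2 - 4.0 * n * sigma**2, 0.0)),
                      0.0)
    return U @ np.diag(shrunk) @ Vt

# Hypothetical usage: a rank-2 signal observed in white noise of level sigma.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, r, sigma = 500, 2, 1.0
    X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank signal
    Y = X + sigma * rng.standard_normal((n, n))                    # noisy observation
    X_hat = optimal_frobenius_shrinkage(Y, sigma)
    print("squared error:", np.sum((X_hat - X) ** 2),
          "  asymptotic MSE guarantee 2*n*r*sigma^2 =", 2 * n * r * sigma**2)
```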
Submission history
From: Matan Gavish
[v1] Thu, 29 May 2014 10:10:15 UTC (117 KB)
[v2] Sat, 31 May 2014 01:11:53 UTC (116 KB)
[v3] Sun, 15 May 2016 20:33:33 UTC (677 KB)