Volume 16 (2020) Article 15 pp. 1-25
APPROX-RANDOM 2016 Special Issue
Online Row Sampling
Received: April 14, 2017
Revised: April 10, 2020
Published: December 11, 2020
Keywords: spectral sparsification, leverage scores, online algorithms, matrix approximation
ACM Classification: F.2.1, F.1.2, G.2.2
AMS Classification: 68W25, 68W20, 68R10

Abstract:

$ \newcommand{\Oh}{\mathcal{O}} \newcommand{\bv}[1]{\mathbf{#1}} \newcommand{\norm}[1]{\|#1\|} $

Finding a small spectral approximation for a tall $n \times d$ matrix $\bv A$ is a fundamental numerical primitive. For a number of reasons, one often seeks an approximation whose rows are sampled from those of $\bv{A}$. Row sampling improves interpretability, saves space when $\bv A$ is sparse, and preserves structure, which is important, e.g., when $\bv A$ represents a graph.
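
For concreteness, the abstract relies on a standard notion that it does not restate: a matrix $\tilde{\bv A}$ is a $(1\pm\epsilon)$ spectral approximation of $\bv A$ if, for every $\bv x \in \mathbb{R}^d$,

$$ (1-\epsilon)\,\norm{\bv A \bv x}_2^2 \;\le\; \norm{\tilde{\bv A} \bv x}_2^2 \;\le\; (1+\epsilon)\,\norm{\bv A \bv x}_2^2. $$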

However, correctly sampling rows from $\bv{A}$ can be costly when the matrix is large and cannot be stored and processed in memory. Hence, a number of recent publications focus on row sampling in the streaming setting, using little more space than what is required to store the returned approximation (Kelner and Levin, Theory Comput. Syst. 2013; Kapralov et al., SIAM J. Comput. 2017).

Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of $\bv A$ one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions. We present an extremely simple algorithm that approximates $\bv A$ up to multiplicative error $1+\epsilon$ and additive error $\delta$ using $\Oh(d \log d \log (\epsilon\norm{\bv A}_2^2/\delta) / \epsilon^2)$ online samples, with memory overhead proportional to the cost of storing the spectral approximation. We also present an algorithm that uses $\Oh(d^2)$ memory but only requires $\Oh(d \log (\epsilon\norm{\bv A}_2^2/\delta) / \epsilon^2)$ samples, which we show is optimal.
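
To make the sampling scheme concrete, the following is a minimal NumPy sketch in the spirit of the simple algorithm described above: each arriving row is kept with probability proportional to an online ridge leverage score computed against the approximation built so far, and kept rows are rescaled so the approximation is unbiased. The constant c, the oversampling factor $\log d$, and the explicit $d \times d$ Gram matrix are illustrative simplifications, not the paper's exact formulation.

    import numpy as np

    def online_row_sample(rows, d, eps, delta, c=8.0):
        # Sketch of online leverage-score row sampling: read rows one by
        # one, keep or discard each immediately, never retract a decision.
        rng = np.random.default_rng()
        kept = []                 # rows of the approximation A_tilde
        M = delta * np.eye(d)     # running A_tilde^T A_tilde + delta * I
        for a in rows:
            a = np.asarray(a, dtype=float)
            # Online ridge leverage score estimate a^T M^{-1} a, capped at 1.
            score = min((1.0 + eps) * a @ np.linalg.solve(M, a), 1.0)
            p = min(c * score * np.log(d) / eps**2, 1.0)
            if rng.random() < p:
                r = a / np.sqrt(p)   # rescaling keeps E[A_tilde^T A_tilde] = A^T A
                kept.append(r)
                M += np.outer(r, r)
        return np.array(kept)

For simplicity, this sketch holds the $d \times d$ Gram matrix in working memory; the paper's variants manage memory more carefully, and its analysis pins down the exact sampling probabilities and constants.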

Our methods are clean and intuitive, allow for lower memory usage than prior work, and expose new theoretical properties of leverage score based matrix approximation.

------------------

A preliminary version of this paper appeared in the Proceedings of the 19th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX 2016).