Publications

Results 1–50 of 199

Stochastic Gradients for Large-Scale Tensor Decomposition

SIAM Journal on Mathematics of Data Science

Kolda, Tamara G.; Hong, David H.

Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP tensor decomposition is a recently proposed version of tensor decomposition that allows for a variety of loss functions such as Bernoulli loss for binary data or Huber loss for robust estimation. Here, the stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor times Khatri–Rao product (MTTKRP) tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
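
As a concrete illustration of the sampled-gradient idea, here is a minimal NumPy sketch for the dense, uniform-sampling case; the function name, the choice of a Bernoulli (odds-link) loss, and the implementation details are assumptions for illustration, not the paper's actual implementation.

    import numpy as np

    def sampled_gcp_gradient(X, A, B, C, num_samples, rng):
        # Stochastic gradient for a rank-R GCP model of a dense 3-way tensor X,
        # illustrated with the Bernoulli odds-link loss f(x, m) = log(m+1) - x*log(m)
        # (factor matrices are assumed entrywise positive for this loss).
        I, J, K = X.shape
        N = I * J * K
        # Uniformly sample tensor entries (with replacement).
        ii = rng.integers(0, I, num_samples)
        jj = rng.integers(0, J, num_samples)
        kk = rng.integers(0, K, num_samples)
        # Model values m = sum_r A[i,r]*B[j,r]*C[k,r], computed only at sampled entries.
        m = np.einsum('sr,sr,sr->s', A[ii], B[jj], C[kk])
        x = X[ii, jj, kk]
        # Elementwise loss derivative, rescaled so the gradient estimate is unbiased.
        y = (1.0 / (m + 1.0) - x / m) * (N / num_samples)
        # Sparse MTTKRPs: scatter-add each sampled entry's contribution to each factor.
        GA, GB, GC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
        np.add.at(GA, ii, y[:, None] * B[jj] * C[kk])
        np.add.at(GB, jj, y[:, None] * A[ii] * C[kk])
        np.add.at(GC, kk, y[:, None] * A[ii] * B[jj])
        return GA, GB, GC

The gradients returned here would feed a standard stochastic optimizer; the stratified sampling for sparse tensors described above replaces the uniform sampling step.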


Faster Johnson–Lindenstrauss transforms via Kronecker products

Information and Inference: A Journal of the IMA

Jin, Ruhui J.; Ward, Rachel W.; Kolda, Tamara G.

The Kronecker product is an important matrix operation with a wide range of applications in signal processing, graph theory, quantum computing and deep learning. In this work, we introduce a generalization of the fast Johnson–Lindenstrauss projection for embedding vectors with Kronecker product structure, the Kronecker fast Johnson–Lindenstrauss transform (KFJLT). The KFJLT reduces the embedding cost by an exponential factor of the standard fast Johnson–Lindenstrauss transform’s cost when applied to vectors with Kronecker structure, by avoiding explicitly forming the full Kronecker products. Here, we prove that this computational gain comes with only a small price in embedding power: consider a finite set of $p$ points in a tensor product of $d$ constituent Euclidean spaces $\bigotimes_{k=d}^{1} \mathbb{R}^{n_k}$, and let $N = \prod_{k=1}^{d} n_k$. With high probability, a random KFJLT matrix of dimension $m \times N$ embeds the set of points up to multiplicative distortion $(1 \pm \varepsilon)$ provided $m \gtrsim \varepsilon^{-2} \log^{2d-1}(p) \log N$. We conclude by describing a direct application of the KFJLT to the efficient solution of large-scale Kronecker-structured least squares problems for fitting the CP tensor decomposition.
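
To make the construction concrete, the following is a small NumPy sketch of a KFJLT-style embedding applied to a vector given in Kronecker form; the sign-flip-plus-FFT mixing and the uniform row sampling follow the description above, but the scaling conventions and sampling with replacement are simplifications of my own rather than the paper's exact construction.

    import numpy as np

    def kfjlt(factors, m, rng):
        # Embed x = factors[0] (x) factors[1] (x) ... (x) factors[d-1] (leftmost first)
        # without ever forming the full length-N Kronecker product.
        N = int(np.prod([len(x) for x in factors]))
        mixed = []
        for x in factors:
            signs = rng.choice([-1.0, 1.0], size=len(x))
            mixed.append(np.fft.fft(signs * x) / np.sqrt(len(x)))  # unitary-scaled FFT
        # Each coordinate of the Kronecker product is a product of one coordinate
        # from each mixed factor, so sampling m rows avoids the O(N) blow-up.
        rows = rng.integers(0, N, size=m)
        out = np.ones(m, dtype=complex)
        for x in reversed(mixed):       # unravel the flat row index mode by mode
            n_k = len(x)
            out *= x[rows % n_k]
            rows = rows // n_k
        return np.sqrt(N / m) * out

The per-vector cost is dominated by the d small FFTs, roughly sum_k n_k log n_k plus m*d work for the sampled rows, rather than the N log N cost of a standard fast transform applied to the assembled length-N vector.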


Mathematics: The Tao of Data Science

Harvard Data Science Review

Kolda, Tamara G.

The two pieces, “Ten Research Challenge Areas in Data Science” by Jeannette M. Wing and “Challenges and Opportunities in Statistics and Data Science: Ten Research Areas” by Xuming He and Xihong Lin, provide an impressively complete list of data science challenges from luminaries in the field of data science. They have done an extraordinary job, so this response offers a complementary viewpoint from a mathematical perspective and evangelizes advanced mathematics as a key tool for meeting the challenges they have laid out. Notably, we pick up the themes of scientific understanding of machine learning and deep learning, computational considerations such as cloud computing and scalability, balancing computational and statistical considerations, and inference with limited data. We propose that mathematics is an important key to establishing rigor in the field of data science and as such has an essential role to play in its future.


TuckerMPI: A Parallel C++/MPI Software Package for Large-scale Data Compression via the Tucker Tensor Decomposition

ACM Transactions on Mathematical Software

Ballard, Grey; Klinvex, Alicia; Kolda, Tamara G.

Our goal is compression of massive-scale grid-structured data, such as the multi-terabyte output of a high-fidelity computational simulation. For such data sets, we have developed a new software package called TuckerMPI, a parallel C++/MPI software package for compressing distributed data. The approach is based on treating the data as a tensor, i.e., a multidimensional array, and computing its truncated Tucker decomposition, a higher-order analogue to the truncated singular value decomposition of a matrix. The result is a low-rank approximation of the original tensor-structured data. Compression efficiency is achieved by detecting latent global structure within the data, which we contrast to most compression methods that are focused on local structure. In this work, we describe TuckerMPI, our implementation of the truncated Tucker decomposition, including details of the data distribution and in-memory layouts, the parallel and serial implementations of the key kernels, and analysis of the storage, communication, and computational costs. We test the software on 4.5 and 6.7 terabyte data sets distributed across 100s of nodes (1,000s of MPI processes), achieving compression ratios between 100 and 200,000×, which equates to 99–99.999% compression (depending on the desired accuracy), in substantially less time than it would take to even read the same data set from a parallel file system. Moreover, we show that our method also allows for reconstruction of partial or down-sampled data on a single node, without a parallel computer, so long as the reconstructed portion is small enough to fit on a single machine, e.g., in the instance of reconstructing/visualizing a single down-sampled time step or computing summary statistics. The code is available at https://gitlab.com/tensors/TuckerMPI.
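
For a sense of what the compression computes, here is a minimal single-node NumPy sketch of a sequentially truncated higher-order SVD; it is only an illustration of the underlying math, not the TuckerMPI API, and the sizes, ranks, and names are made up. Real simulation data has the latent low-rank structure that makes such truncation accurate; the random array below only illustrates the storage accounting.

    import numpy as np

    def sthosvd(X, ranks):
        # Sequentially truncated HOSVD: for each mode, take the leading left singular
        # vectors of the unfolded core, then shrink the core along that mode.
        G = X.copy()
        factors = []
        for mode, r in enumerate(ranks):
            Gm = np.moveaxis(G, mode, 0).reshape(G.shape[mode], -1)  # mode unfolding
            U = np.linalg.svd(Gm, full_matrices=False)[0][:, :r]
            factors.append(U)
            G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)  # G x_mode U^T
        return G, factors

    X = np.random.default_rng(0).standard_normal((40, 40, 40))
    core, factors = sthosvd(X, (5, 5, 5))
    stored = core.size + sum(U.size for U in factors)
    print("storage compression ratio:", X.size / stored)   # about 88x for these sizes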


Generalized Canonical Polyadic Tensor Decomposition

SIAM Review

Hong, David; Kolda, Tamara G.; Duersch, Jed A.

Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback–Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of the GCP decomposition on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India.
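
Written out, the GCP model fits the same low-rank multilinear structure as standard CP but swaps the elementwise loss; a sketch in standard CP notation, with a few standard loss choices shown for illustration (the paper's full menu of losses may differ):

    \min_{A^{(1)},\dots,A^{(d)}} \; \sum_{i_1,\dots,i_d} f\left(x_{i_1\cdots i_d},\, m_{i_1\cdots i_d}\right),
    \qquad
    m_{i_1\cdots i_d} = \sum_{r=1}^{R} \prod_{k=1}^{d} a^{(k)}_{i_k r},

    f(x,m) = (x-m)^2 \ \text{(Gaussian)}, \quad
    f(x,m) = \log(m+1) - x\log m \ \text{(binary, odds link)}, \quad
    f(x,m) = m - x\log m \ \text{(count data, Poisson)}.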


Estimating higher-order moments using symmetric tensor decomposition

SIAM Journal on Matrix Analysis and Applications

Sherman, Samantha; Kolda, Tamara G.

We consider the problem of decomposing higher-order moment tensors, i.e., the sum of symmetric outer products of data vectors. Such a decomposition can be used to estimate the means in a Gaussian mixture model and for other applications in machine learning. The dth-order empirical moment tensor of a set of p observations of n variables is a symmetric d-way tensor. Our goal is to find a low-rank tensor approximation comprising r < p symmetric outer products. The challenge is that forming the empirical moment tensors costs O(pn^d) operations and O(n^d) storage, which may be prohibitively expensive; additionally, the algorithm to compute the low-rank approximation costs O(n^d) per iteration. Our contribution is avoiding formation of the moment tensor, computing the low-rank tensor approximation of the moment tensor implicitly using O(pnr) operations per iteration and no extra memory. This advance opens the door to more applications of higher-order moments since they can now be efficiently computed. We present numerical evidence of the computational savings and show an example of estimating the means for higher-order moments.
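
The key identity behind the implicit computation is that the moment tensor never needs to be formed, because the contractions used by the fitting algorithm collapse to inner products with the data; a sketch of the principle (the exact kernels in the paper are organized around the factor matrices, but follow the same pattern):

    \mathcal{M} = \frac{1}{p}\sum_{i=1}^{p} x_i^{\otimes d},
    \qquad
    \mathcal{M} \times_2 v \times_3 \cdots \times_d v
      = \frac{1}{p}\sum_{i=1}^{p} \langle x_i, v \rangle^{d-1} x_i,

which costs O(pn) per vector v instead of the O(n^d) required to form and then contract the full tensor.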


Final Report: Parallel Tensor Decompositions for Massive Heterogeneous Incomplete Data (LDRD Project 199986)

Kolda, Tamara G.

We extended the fundamental capabilities of tensor decomposition to a broader range of problems, handling new data types and larger problem sizes. This has implications for data analysis across a range of applications, including sensor monitoring, cybersecurity, treaty verification, and signal processing. Tensor decomposition identifies latent structure within data, enabling anomaly detection, process monitoring, and scientific discovery.


Software for sparse tensor decomposition on emerging computing architectures

SIAM Journal on Scientific Computing

Phipps, Eric T.; Kolda, Tamara G.

In this paper, we develop software for decomposing sparse tensors that is portable to and performant on a variety of multicore, manycore, and GPU computing architectures. The result is a single code whose performance matches optimized architecture-specific implementations. The key to a portable approach is to determine multiple levels of parallelism that can be mapped in different ways to different architectures, and we explain how to do this for the matricized tensor times Khatri-Rao product (MTTKRP), which is the key kernel in canonical polyadic tensor decomposition. Our implementation leverages the Kokkos framework, which enables a single code to achieve high performance across multiple architectures that differ in how they approach fine-grained parallelism. We also introduce a new construct for portable thread-local arrays, which we call compile-time polymorphic arrays. Not only are the specifics of our approaches and implementation interesting for tuning tensor computations, but they also provide a roadmap for developing other portable high-performance codes. As a last step in optimizing performance, we modify the MTTKRP algorithm itself to do a permuted traversal of tensor nonzeros to reduce atomic-write contention. We test the performance of our implementation on 16- and 68-core Intel CPUs and the K80 and P100 NVIDIA GPUs, showing that we are competitive with state-of-the-art architecture-specific codes while having the advantage of being able to run on a variety of architectures.
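
For reference, the serial math that the portable kernel parallelizes is compact; below is a NumPy sketch of a COO-format sparse MTTKRP, intended only to show what the kernel computes (the paper's contribution is the portable, high-performance C++/Kokkos realization of this operation, including the permuted traversal and handling of atomic-write contention).

    import numpy as np

    def sparse_mttkrp(subs, vals, factors, mode):
        # subs: (nnz, d) integer coordinates; vals: (nnz,) nonzero values;
        # factors: list of d factor matrices, each n_k-by-R; mode: target mode.
        nnz, d = subs.shape
        R = factors[0].shape[1]
        # For each nonzero, multiply the matching rows of every factor except `mode`.
        tmp = vals[:, None] * np.ones((nnz, R))
        for k in range(d):
            if k != mode:
                tmp *= factors[k][subs[:, k]]
        # Scatter-add the rows into the output; atomic adds play this role in parallel.
        out = np.zeros_like(factors[mode])
        np.add.at(out, subs[:, mode], tmp)
        return out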


An improved hyperbolic embedding algorithm

Journal of Complex Networks

Chowdhary, Kenny; Kolda, Tamara G.

Because hyperbolic space has properties that make it amenable to graph representations, there is significant interest in scalable hyperbolic-space embedding methods. These embeddings enable constant-time approximation of shortest-path distances, and so are significantly more efficient than full shortest-path computations. In this article, we improve on existing landmark-based hyperbolic embedding algorithms for large-scale graphs. Whereas previous methods compute the embedding by using the derivative-free Nelder–Mead simplex optimization method, our approach uses the limited-memory BFGS (L-BFGS) method, a quasi-Newton optimization method, with analytic gradients. Our method is not only significantly faster but also produces higher-quality embeddings. Moreover, we are able to include the hyperbolic curvature as a variable in the optimization. We compare our hyperbolic embedding method, implemented in Python and called Hypy, against the best publicly available software, Rigel. Our method is an order of magnitude faster and shows significant improvements in the accuracy of the shortest-path distance calculations. Tests are performed on a variety of real-world networks, and we show the scalability of our method by embedding a graph with 1.8 billion edges and 65 million nodes.
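
As a toy illustration of the optimization being solved, the sketch below fits a small all-pairs hyperbolic embedding with SciPy's L-BFGS; it uses the hyperboloid model with fixed curvature, dense target distances, and a numerically approximated gradient, all of which are simplifications of my own (the paper works with landmark distances, analytic gradients, and a learned curvature at billion-edge scale).

    import numpy as np
    from scipy.optimize import minimize

    def hyperbolic_dist(u, v):
        # Hyperboloid model: store only the spatial part; the time coordinate is implicit.
        u0 = np.sqrt(1.0 + u @ u)
        v0 = np.sqrt(1.0 + v @ v)
        return np.arccosh(np.maximum(u0 * v0 - u @ v, 1.0))

    def embed(D, dim=2, seed=0):
        # Fit coordinates so pairwise hyperbolic distances match the target distances D.
        n = D.shape[0]
        rng = np.random.default_rng(seed)

        def loss(flat):
            X = flat.reshape(n, dim)
            return sum((hyperbolic_dist(X[i], X[j]) - D[i, j]) ** 2
                       for i in range(n) for j in range(i + 1, n))

        res = minimize(loss, rng.normal(scale=0.1, size=n * dim), method="L-BFGS-B")
        return res.x.reshape(n, dim)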


Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis

Neuron

Williams, Alex H.; Kim, Tony H.; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I.; Shenoy, Krishna V.; Schnitzer, Mark; Kolda, Tamara G.; Ganguli, Surya

Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain–machine interface learning. In brief, Williams et al. describe an unsupervised method to uncover simple structure in large-scale recordings by extracting distinct cell assemblies with rapid within-trial dynamics, reflecting interpretable aspects of perceptions, actions, and thoughts, and slower across-trial dynamics reflecting learning and internal state changes.
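
In practice, TCA is a canonical polyadic (CP) decomposition of a neurons × timepoints × trials data array; a minimal sketch using the open-source tensorly library (not necessarily the tooling used in the paper), with made-up dimensions and rank:

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(0)
    data = rng.random((120, 200, 60))     # stand-in for neurons x timepoints x trials

    # Each of the 8 components yields a neuron factor (cell assembly), a temporal
    # factor (within-trial dynamics), and a trial factor (slow across-trial changes).
    weights, factors = parafac(tl.tensor(data), rank=8, normalize_factors=True)
    neuron_f, time_f, trial_f = factors
    print(neuron_f.shape, time_f.shape, trial_f.shape)   # (120, 8) (200, 8) (60, 8)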


Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV)

Pinar, Ali P.; Kolda, Tamara G.; Carlberg, Kevin T.; Ballard, Grey B.; Mahoney, Michael M.

Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.
