bpames.people.ua.edu
  • Home
  • Publications
  • Seminar Talks
  • Software
  • Teaching
  • Contact

Preprints

Proximal methods for sparse optimal scoring and discriminant analysis
Authors: Summer Atkins, Gudmundur Einarsson, Brendan Ames, and Line Clemmensen.
Abstract: Linear discriminant analysis (LDA) is a classical method for dimensionality reduction, where discriminant vectors are sought to project data to a lower dimensional space for optimal separability of classes. Several recent papers have outlined strategies for exploiting sparsity when applying LDA to high-dimensional data; however, many lack scalable methods for solving the underlying optimization problems. We propose new numerical optimization schemes for solving the sparse optimal scoring formulation of LDA based on block coordinate descent, the proximal gradient method, and the alternating direction method of multipliers. We show that the per-iteration cost of these methods scales linearly in the dimension of the data provided structured regularization terms are employed, and cubically in the dimension of the data in the worst case. Furthermore, we establish that if our block coordinate descent framework generates convergent subsequences of iterates, then these subsequences converge to stationary points of the sparse optimal scoring problem. Finally, we demonstrate the effectiveness of our new methods with empirical results for classification of time-series and Gaussian data.
  • Arxiv preprint of "Proximal methods for sparse optimal scoring and discriminant analysis".
  • Supplemental Material for "Proximal methods for sparse optimal scoring and discriminant analysis".
  • Matlab code for "Proximal methods for sparse optimal scoring and discriminant analysis".
  • R package accSDA
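The proximal gradient step at the heart of solvers like the one above can be illustrated on a generic l1-penalized least-squares problem. The sketch below is not the paper's algorithm (which operates on the optimal scoring formulation); it is a minimal illustration of a proximal gradient iteration using the soft-thresholding operator, with all names and parameters invented for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal (shrinkage) step
    return x
```

The step size should be at most 1/L, where L = ||A||_2^2 is the Lipschitz constant of the smooth term's gradient; the per-iteration cost is dominated by the two matrix-vector products.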
A sparse regression approach for evaluating and predicting NHL results
Authors: Nicholas Laffey and Brendan Ames
Abstract: We consider the use of sparse regression techniques in the development of tools for predicting and analyzing NHL team performance. In particular, we develop generalized linear models based on OLS and Poisson regression with elastic net regularization for estimating the number of regular season and playoff wins from a wide variety of regular season team statistics. Our models outperform most previous models in terms of playoff series prediction accuracy. Moreover, the sparsity-inducing elastic net penalty thresholds almost all coefficients in our models to zero, providing increased interpretability; the nonzero coefficients correspond to the statistics that most influence the models' estimates of team performance.
  • Technical report for "A sparse regression approach for evaluating and predicting NHL results".
  • R code and data for "A sparse regression approach for evaluating and predicting NHL results".
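As a rough illustration of the kind of model described above, here is a minimal elastic-net linear regression solved by proximal gradient descent in plain NumPy. The paper's own code is in R (linked above); the data, penalty weights, and function names here are invented for the sketch.

```python
import numpy as np

def elastic_net_prox(v, t, alpha, lam):
    """Prox of t*(lam*alpha*||.||_1 + 0.5*lam*(1-alpha)*||.||^2):
    soft-threshold, then shrink by the ridge factor."""
    st = np.sign(v) * np.maximum(np.abs(v) - t * lam * alpha, 0.0)
    return st / (1.0 + t * lam * (1.0 - alpha))

def elastic_net_regression(X, y, lam=0.5, alpha=0.9, n_iter=2000):
    """OLS loss with an elastic-net penalty, solved by proximal gradient."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth term
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = elastic_net_prox(beta - step * grad, step, alpha, lam)
    return beta
```

With alpha near 1 the penalty behaves like the lasso and zeroes out most coefficients, which is the source of the interpretability noted in the abstract; the small ridge component stabilizes the fit when predictors are correlated.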

Publications

Classification of visual comprehension based on EEG data using sparse optimal scoring
Authors: Linda Katherine Wood Ford, Joshua Borneman, Julia Krebs, Evguenia Malaia, and Brendan Ames.
To appear in the Journal of Neural Engineering.
DOI: https://doi.org/10.1088/1741-2552/abdb3b
Abstract: Understanding and differentiating brain states is an important task in the field of cognitive neuroscience with applications in health diagnostics (such as detecting neurotypical development vs. Autism Spectrum or coma/vegetative state vs. locked-in state). Electroencephalography (EEG) analysis is a particularly useful tool for this task as EEG data can detect millisecond-level changes in brain activity across a range of frequencies in a non-invasive and relatively inexpensive fashion. The goal of this study is to apply machine learning methods to EEG data in order to classify visual language comprehension across multiple participants. 26-channel EEG was recorded for 24 Deaf participants while they watched videos of sign language sentences played in time-direct and time-reverse formats to simulate interpretable vs. uninterpretable sign language, respectively. Sparse Optimal Scoring (SOS) was applied to the EEG data in order to classify which type of video a participant was watching, time-direct or time-reversed. The use of SOS also served to reduce the dimensionality of the features to improve model interpretability. The analysis of frequency-domain EEG data resulted in an average out-of-sample classification accuracy of 98.89%, which was far superior to the time-domain analysis. This high classification accuracy suggests this model can accurately identify common neural responses to visual linguistic stimuli.
Convex Optimization for the Densest Subgraph and Densest Submatrix Problems
Authors: Polina Bombina and Brendan Ames.
To appear in SN Operations Research Forum.
Abstract: We consider the densest k-subgraph problem, which seeks to identify the k-node subgraph of a given input graph with maximum number of edges. This problem is well-known to be NP-hard, by reduction to the maximum clique problem. We propose a new convex relaxation for the densest k-subgraph problem, based on a nuclear norm relaxation of a low-rank plus sparse decomposition of the adjacency matrices of k-node subgraphs to partially address this intractability. We establish that the densest k-subgraph can be recovered with high probability from the optimal solution of this convex relaxation if the input graph is randomly sampled from a distribution of random graphs constructed to contain an especially dense k-node subgraph with high probability. Specifically, the relaxation is exact when the edges of the input graph are added independently at random, with edges within a particular k-node subgraph added with higher probability than other edges in the graph. We provide a sufficient condition on the size of this subgraph k and the expected density under which the optimal solution of the proposed relaxation recovers this k-node subgraph with high probability. Further, we propose a first-order method for solving this relaxation based on the alternating direction method of multipliers, and empirically confirm our predicted recovery thresholds using simulations involving randomly generated graphs, as well as graphs drawn from social and collaborative networks.
  • Arxiv preprint of "Convex optimization for the densest subgraph and densest submatrix problems".
  • Matlab code for "Convex optimization for the densest subgraph and submatrix problems".
  • R package admmDensestSubmatrix
Exact Clustering of Weighted Graphs via Semidefinite Programming
Authors: Aleksis Pirinen and Brendan Ames.
Manuscript: Journal of Machine Learning Research 20 (2019) 1-3
Summary: As a model problem for clustering, we consider the densest k-disjoint-clique problem of partitioning a weighted complete graph into k disjoint subgraphs such that the sum of the densities of these subgraphs is maximized. We establish that such subgraphs can be recovered from the solution of a particular semidefinite relaxation with high probability if the input graph is sampled from a distribution of clusterable graphs. Specifically, the semidefinite relaxation is exact if the graph consists of k large disjoint subgraphs, corresponding to clusters, with weight concentrated within these subgraphs, plus a moderate number of outliers. Further, we establish that if noise only weakly obscures these clusters, i.e., the between-cluster edges are assigned very small weights, then we can recover significantly smaller clusters. For example, we show that in approximately sparse graphs, where the between-cluster weights tend to zero as the size n of the graph tends to infinity, we can recover clusters of size polylogarithmic in n. Empirical evidence from numerical simulations is also provided to support these theoretical phase transitions to perfect recovery of the cluster structure.
  • Arxiv preprint for "Exact Clustering of Weighted Graphs via Semidefinite Programming".
  • Matlab code for "Exact Clustering of Weighted Graphs via Semidefinite Programming".
Alternating direction method of multipliers for sparse zero-variance discriminant analysis and principal component analysis
Authors: Brendan Ames and Mingyi Hong.
Manuscript: Computational Optimization and Applications (2016) 64: 725.
DOI: 10.1007/s10589-016-9828-y
Summary: We propose sparse zero-variance discriminant analysis (SZVD) as a method for simultaneously performing linear discriminant analysis and feature selection for data in the high dimension, small sample size setting. This method combines classical zero-variance discriminant analysis, where discriminant vectors are identified in the null space of the sample within-class covariance matrix, with penalization applied to the discriminant vectors to induce sparse solutions. We propose a simple algorithm based on the alternating direction method of multipliers for approximately solving the resulting nonconvex optimization problem and show that this algorithm converges to a stationary point of our problem. Finally, we provide simulation results supporting the use of our classification heuristic on simulated data and data drawn from applications in time-series classification.
  • Arxiv preprint of "ADMM for Sparse ZVDA and PCA".
  • Optimization Online preprint of "ADMM for Sparse ZVDA and PCA".
  • Matlab code for "ADMM for Sparse ZVDA and PCA".
  • R code for "ADMM for Sparse ZVDA and PCA"
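For readers unfamiliar with the alternating direction method of multipliers, the scheme splits a composite objective across two blocks of variables linked by a consensus constraint and alternates cheap updates for each block, plus a dual update. The toy below applies that template to l1-penalized least squares rather than the SZVD problem itself; the problem data and parameter choices are illustrative only.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=300):
    """Generic ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for the x-update
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))             # x-update: ridge-type solve
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # z-update: prox
        u = u + x - z                             # scaled dual update
    return z
```

Caching the factorization of A^T A + rho*I is what makes the per-iteration cost low; in structured settings (as in the papers above) this solve can often be done in linear time.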
Derivatives of compound matrix valued functions
Authors: Brendan Ames and Hristo Sendov.
Manuscript: Journal of Mathematical Analysis and Applications, Volume 433, Issue 2, 15 January 2016, pp. 1459-1485.
DOI: 10.1016/j.jmaa.2015.08.029
Abstract: Primary matrix functions and spectral functions are two classes of orthogonally invariant functions on a symmetric matrix argument. Many of their properties have been investigated thoroughly, and they find numerous theoretical and applied uses in areas ranging from engineering and image processing to optimization and physics. We propose a family of maps that provides a natural connection and generalization of these two classes of functions. The family also contains the well-known multiplicative and additive compound matrices. We explain when each member of this family is a differentiable function and exhibit a formula for its derivative.
Guaranteed Recovery of Planted Cliques and Dense Subgraphs by Convex Relaxation
Author: Brendan Ames.
Manuscript: Journal of Optimization Theory and Applications (2015) 167: 653.
DOI: 10.1007/s10957-015-0777-x
Summary: This paper provides a generalization of earlier results for the maximum clique and biclique problems. We consider a planted case analysis for new convex relaxations for the maximum clique and densest k-subgraph problems. In particular, we establish that a planted clique of size k = Omega(sqrt{N}) can be recovered in a graph on N nodes under noise of the form of random edge additions and deletions. This lower bound matches the current state of the art established several times elsewhere. Analogous results for finding a planted biclique in a bipartite graph are also provided. Numerical simulations are supplied to support our theoretical results.
  • Supplemental material for "Guaranteed Recovery of Planted Cliques and Dense Subgraphs by Convex Relaxation".
  • Arxiv preprint of "Guaranteed Recovery of Planted Cliques and Dense Subgraphs by Convex Relaxation".
  • Optimization Online preprint of "Guaranteed Recovery of Planted Cliques and Dense Subgraphs by Convex Relaxation".
  • Matlab code for "Guaranteed Recovery of Planted Cliques and Dense Subgraphs by Convex Relaxation".
A Leapfrog Strategy for Pursuit-Evasion in a Polygonal Environment
Authors: Brendan Ames, Andrew Beveridge, Rosalie Carlson, Claire Djang, Volkan Isler, Stephen Ragain, Maxray Savage. 
Manuscript: International Journal of Computational Geometry and Applications 25, 77 (2015).
DOI: 10.1142/S0218195915500065
Summary: We study pursuit-evasion in a polygonal environment with polygonal obstacles. It is known that one pursuer can capture the evader in any simply-connected polygonal environment, and that three pursuers are always sufficient in any polygonal environment (possibly with polygonal obstacles). We contribute two new results to this field. First, we fully characterize when an environment with a single obstacle is one-pursuer-win or two-pursuer-win. Second, we give sufficient (but not necessary) conditions for an environment to have a winning strategy for two pursuers. Such environments can be swept by a leapfrog strategy in which the two pursuers alternately guard/increase the currently controlled area. More concretely, for an environment with n vertices, we describe an O(n^2) algorithm that (1) determines whether the obstacles are well-separated, and if so, (2) constructs the required partition for a leapfrog strategy.
  • Results of 2012 MAXIMA Interdisciplinary REU.
  • Arxiv preprint of "A Leapfrog Strategy for Pursuit-Evasion in a Polygonal Environment".
Solving ptychography with a convex relaxation
Authors: Roarke Horstmeyer, Richard Y. Chen, Xiaoze Ou, Brendan Ames, Joel A. Tropp, and Changhuei Yang.
Manuscript: New Journal of Physics, Vol 17, May 2015.
DOI: 10.1088/1367-2630/17/5/053044
Summary: Ptychography is a powerful computational imaging technique that transforms a collection of low-resolution images into a high-resolution sample reconstruction. Unfortunately, algorithms that are currently used to solve this reconstruction problem lack stability, robustness, and theoretical guarantees. Recently, convex optimization algorithms have improved the accuracy and reliability of several related reconstruction efforts. This paper proposes a convex formulation of the ptychography problem. This formulation has no local minima, it can be solved using a wide range of algorithms, it can incorporate appropriate noise models, and it can include multiple a priori constraints. The paper considers a specific algorithm, based on low-rank factorization, whose runtime and memory usage are near-linear in the size of the output image. Experiments demonstrate that this approach offers a 25% lower background variance on average than alternating projections, the current standard algorithm for ptychographic reconstruction.
  • Arxiv preprint of "Solving ptychography with a convex relaxation".
Guaranteed clustering and biclustering via semidefinite programming
Author: Brendan Ames
Manuscript: Mathematical Programming. 147(1-2): 429-465, 2014.
DOI: 10.1007/s10107-013-0729-x
Summary: We propose new semidefinite relaxations for the weighted graph clustering and biclustering problems. Specifically, we consider the problem of clustering the nodes of a given weighted complete graph such that the sum of the average edge-weights of the corresponding subgraphs is maximized. We provide a planted case analysis of this problem; we show that if the input graph contains sufficiently large disjoint clusters, then these clusters can be recovered from the optimal solution of our relaxation. These results generalize those for the planted k-disjoint-clique problem and the stochastic block model to weighted graphs.
  • Arxiv preprint of "Guaranteed clustering and biclustering via semidefinite programming".
  • Matlab code for "Guaranteed clustering and biclustering via semidefinite programming".
Convex optimization for the planted k-disjoint-clique problem
Authors: Brendan Ames and Stephen Vavasis. 
Manuscript: Mathematical Programming. 143(1-2): 299-337, 2014.
DOI: 10.1007/s10107-013-0733-1
Summary: We consider a semidefinite relaxation for the planted k-disjoint-clique problem. Given an input graph, the maximum k-disjoint-clique problem seeks the subgraph comprising k disjoint complete graphs of maximum size. We establish that our relaxation is exact for certain classes of input graphs. In particular, if the input graph consists of sufficiently large disjoint cliques plus noise in the form of diversionary nodes and edges, then the node sets of these planted cliques can be recovered from the optimal solution of our relaxation.
  • Arxiv preprint of "Convex optimization for the planted k-disjoint-clique problem".
  • Matlab code for "Convex optimization for the planted k-disjoint-clique problem".
A new derivation of a formula by Kato
Authors: Brendan Ames and Hristo Sendov.
Manuscript: Linear Algebra and Its Applications. 436(3): 722-730, 2012.
DOI: 10.1016/j.laa.2011.07.034
Summary: We provide a new derivation of Kato's classical formulae for the coefficients in the power series expansion of a simple eigenvalue of a real symmetric matrix A in direction E. Kato's classical derivation of this formula uses a complex-analytic approach involving properties of the resolvent of A+tE. Our derivation uses only simple real-analytic and combinatorial arguments. In particular, we derive and utilize a formula for the derivative of the Moore-Penrose generalized inverse of the map lambda_m(A) - A in direction E.
Nuclear norm minimization for the planted clique and biclique problems
Authors: Brendan Ames and Stephen Vavasis.
Manuscript: Mathematical Programming. 129(1): 69-89, 2011.
DOI: 10.1007/s10107-011-0459-x
Summary: We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been shown to be an effective way to solve rank optimization problems. In the special cases that the input graph has a planted clique or biclique (i.e., a single large clique or biclique plus diversionary edges), our algorithm successfully provides an exact solution to the original instance. For each problem, we provide two analyses of when our algorithm succeeds. In the first analysis, the diversionary edges are placed by an adversary. In the second, they are placed at random. In the case of random edges for the planted clique problem, we obtain the same bound as Alon, Krivelevich and Sudakov as well as Feige and Krauthgamer, but we use different techniques.
  • Arxiv preprint of "Nuclear norm minimization for the planted clique and biclique problems".
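Nuclear-norm relaxations like the one above are typically solved with first-order methods whose core step is the proximal operator of the nuclear norm, i.e., soft-thresholding of the singular values. The sketch below shows only that operator, not the algorithm from the paper; the threshold level and test matrix in the example are invented.

```python
import numpy as np

def singular_value_threshold(Y, tau):
    """Prox of tau*||.||_* : soft-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applied to a noisy rank-one matrix with a threshold above the noise level, this operator returns a rank-one matrix, which is why nuclear-norm algorithms drive their iterates toward low rank.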
Asymptotic expansions of the ordered spectrum of symmetric operators
Authors: Brendan Ames and Hristo Sendov. 
Manuscript: Nonlinear Analysis, Series A: Theory, Methods and Applications. 72(11): 4288-4297, 2010.
DOI: 10.1016/j.na.2010.02.005
Summary: In this work, we build on ideas of M. Torki and show that if the map A(t) has a one-sided asymptotic expansion at 0^+ of order K then so does the m-th largest eigenvalue of A(t), for any m. Moreover, we derive formulas for computing the coefficients in this asymptotic expansion.

Theses and Dissertations

Ph.D. Dissertation (2011)

Convex relaxation for the planted clique, biclique, and clustering problems
Completed under the supervision of Dr. Stephen Vavasis, Department of Combinatorics and Optimization, University of Waterloo, 2011.
  • Manuscript of "Convex relaxation for the planted clique, biclique, and clustering problems". 
  • Matlab code for "Convex relaxation for the planted clique, biclique, and clustering problems".

M.Sc. Thesis (2007)

Taylor expansions of the spectrum of a symmetric matrix. 
Completed under the supervision of Dr. Hristo Sendov, Department of Mathematics and Statistics, University of Guelph, 2007.
  • Manuscript of "Taylor expansions of the spectrum of a symmetric matrix". 
Accessibility | Equal Opportunity | UA Disclaimer | Site Disclaimer | Privacy
Copyright © 2020 | The University of Alabama | Tuscaloosa, AL 35487 | (205) 348-6010
Website provided by the Center for Instructional Technology, Office of Information Technology