Andres Potapczynski

  • PhD in Data Science, NYU
  • MSc in Data Science, Columbia University
  • BSc in Applied Mathematics, ITAM
  • BA in Economics, ITAM

Research & Projects

A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks
We propose Neural-IVP, a method for approximating solutions to high-dimensional PDEs through neural networks. Our method is scalable, well-conditioned, and runs in time linear in the number of parameters in the neural network.
Topics: Inductive Biases, Partial Differential Equations, Numerical Linear Algebra.

PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
We develop a compression approach based on quantizing neural network parameters in a random linear subspace, substantially improving on previous state-of-the-art generalization bounds and showing how these tight bounds help explain the role of model size, equivariance, and implicit biases in optimization -- 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Topics: Random Subspaces, Quantization, Equivariance, PAC-Bayes bounds.
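The subspace idea can be illustrated in a few lines: describe a long parameter vector by a handful of quantized coordinates in a random linear subspace. This is a minimal numpy sketch; the subspace dimension, level count, and uniform rounding scheme here are illustrative choices, not the paper's construction.

```python
import numpy as np

def subspace_quantize(theta, d=8, levels=16, rng=np.random.default_rng(0)):
    """Represent parameters theta via quantized coordinates in a
    random d-dimensional linear subspace.

    Only the d quantized coordinates (plus the projection seed) need
    storing, so the description length is tiny compared to theta.size.
    """
    P = rng.standard_normal((theta.size, d)) / np.sqrt(theta.size)
    w = np.linalg.lstsq(P, theta, rcond=None)[0]    # best coordinates in the subspace
    step = (w.max() - w.min()) / (levels - 1)
    w_q = np.round((w - w.min()) / step) * step + w.min()  # uniform quantization
    return P @ w_q, w_q                             # reconstruction and compact code

theta = np.random.default_rng(1).standard_normal(1000)
theta_hat, code = subspace_quantize(theta)
```

The compact code (`code` plus a seed) is what a compression-based PAC-Bayes bound would charge for, rather than the full parameter vector.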

Low-Precision Arithmetic for Fast Gaussian Processes
We study the different failure modes that can occur when training Gaussian processes in half precision. To circumvent these failure modes, we propose a multi-faceted approach combining conjugate gradients with re-orthogonalization, mixed precision, and preconditioning -- 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022).
Topics: Gaussian Processes, Quantization, Numerical Linear Algebra.
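The role of re-orthogonalization in conjugate gradients can be sketched as follows. This is a float64 toy example; the paper's setting is half precision, and the matrix, tolerance, and iteration budget here are illustrative.

```python
import numpy as np

def cg_reorth(A, b, iters=50, tol=1e-8):
    """Conjugate gradients with explicit residual re-orthogonalization.

    Re-projecting each new residual against the stored ones restores
    the orthogonality that round-off error gradually destroys.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    R = [r / np.linalg.norm(r)]              # stored unit residuals
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        for q in R:                          # Gram-Schmidt re-orthogonalization
            r_new -= (q @ r_new) * q
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        R.append(r / np.linalg.norm(r))
    return x

# Toy symmetric positive-definite system standing in for a kernel matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = cg_reorth(A, b)
```

In exact arithmetic the projections subtract nothing; in low precision they are what keeps the iteration from stalling.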

Bias-Free Scalable Gaussian Processes via Randomized Truncations
We identify the biases introduced by approximate methods and eliminate them via randomized truncation estimators -- 38th International Conference on Machine Learning (ICML 2021).
Topics: Gaussian Processes, Russian-Roulette estimators, Kernel Approximations, Numerical Linear Algebra.
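The Russian-roulette construction behind randomized truncation can be illustrated on a toy infinite series: truncate at a random depth and reweight each kept term by its survival probability, so the estimator stays unbiased. This is a minimal sketch; the geometric stopping rule and the series are illustrative, not the paper's estimator.

```python
import random

def russian_roulette_estimate(term, q=0.5, rng=random.Random(0)):
    """Unbiased single-sample estimate of sum_{k=0}^inf term(k).

    Term k is kept with probability q**k, so dividing each kept term
    by that probability makes the expectation equal the full series.
    """
    total, k, survive_prob = 0.0, 0, 1.0
    while True:
        total += term(k) / survive_prob   # importance-weight the kept term
        if rng.random() > q:              # stop with probability 1 - q
            return total
        survive_prob *= q                 # P(reaching term k + 1)
        k += 1

# Toy check: sum_{k>=0} 0.5**k = 2; average many independent estimates.
est = sum(russian_roulette_estimate(lambda k: 0.5**k, rng=random.Random(s))
          for s in range(20000)) / 20000
```

Each individual estimate is cheap (it touches only a random prefix of the series), and the average converges to the exact sum with no truncation bias.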

Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax
We introduce a family of continuous relaxations that is more flexible, extensible and better performing than the Gumbel-Softmax -- 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
Topics: Generative modeling, VAEs, Normalizing Flows, Continuous Relaxations.
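For reference, the Gumbel-Softmax baseline that this work improves on can be sketched in numpy: perturb the logits with Gumbel noise, then apply a temperature-scaled softmax. The temperature and logits below are illustrative.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Draw a relaxed one-hot sample from a categorical distribution.

    As tau -> 0 the samples approach discrete argmax draws; larger tau
    gives smoother points in the interior of the simplex.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

sample = gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1])))
```

Because the sample is a differentiable function of the logits, gradients can flow through it, which is the property any replacement relaxation must preserve.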

Nowcasting with Google Trends
I propose an alternative kernel bandwidth selection algorithm and show which Google searches are relevant for predicting unemployment, influenza outbreaks, and violence spikes in Mexico. The content is in English (past the acknowledgments), and the most relevant pages are 4, 26, 36, 43, and 48 -- Undergraduate Thesis.

Classifying webpages based on their menu
By modifying Word2Vec, we obtain an embedding that helps cluster clients based on their webpages' menu content -- Capstone Project.