Alan Jeffares
I’m a 3rd-year Machine Learning PhD student at the University of Cambridge in the Department of Applied Mathematics. I am interested in building a better understanding of empirical phenomena in deep learning (e.g. double descent, optimization heuristics) and developing methodological advances from these insights (e.g. deep ensembles, mixture-of-experts). I hold an MSc in Machine Learning from University College London and a BSc in Statistics from University College Dublin. I have previously worked as a Data Scientist at Accenture’s global center for R&D innovation and at the Insight Research Centre for Data Analytics. Email: aj659 [at] cam [dot] ac [dot] uk.
🗞️ News 🗞️
September 2024 → New paper accepted at NeurIPS 2024, more details coming soon!
June 2024 → Excited to have begun my summer internship at Microsoft Research Redmond, where I’ll be working on discrete optimization and mixture-of-experts models under the brilliant Lucas Liu and the deep learning team.
May 2024 → New paper accepted at ICML 2024! This paper tackles the task of estimating well-calibrated prediction intervals and proposes a simple alternative to quantile regression that relaxes the implicit assumption of a symmetric noise distribution. I will also present “Looking at Deep Learning Phenomena Through a Telescoping Lens” at the HiLD workshop.
Sep 2023 → Two papers accepted at NeurIPS 2023! One oral (top 0.5% of submissions) that provides an alternative take on double descent, suggesting it may not be so contradictory to classical statistical notions of model complexity. Then, a poster that investigates whether deep ensembles can be trained jointly rather than independently.
Jan 2023 → Two papers accepted! 🥳 One at AISTATS 2023 ([paper]) and one at ICLR 2023 ([paper]). These papers explore self-supervised learning for conformal prediction and a new regularizer for neural networks, respectively. I look forward to presenting these with my co-authors!
April 2022 → I have officially started a PhD in Machine Learning at the University of Cambridge under the supervision of Mihaela van der Schaar!
Jan 2022 → First paper accepted! 🎉 Work done during my master’s thesis under the supervision of Timos Moraitis and Pontus Stenetorp has been accepted as a spotlight (top 5% of submissions) at ICLR 2022. This paper took a neuroscience-inspired approach to improving the accuracy–efficiency trade-off in RNNs.
Dec 2021 → Graduated 🎓 I have officially graduated with an MSc in Machine Learning from UCL. I was also a recipient of a Dean’s list award for “outstanding academic performance”.
📚 Research 📚
Please find some of my publications below (a more up-to-date list can be found on Google Scholar).
“*” denotes equal contribution.
Preprints
- A. Curth, A. Jeffares, M. van der Schaar. Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers. [preprint]
Conferences
- T. Pouplin*, A. Jeffares*, N. Seedat, M. van der Schaar. Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise. ICML 2024 [paper] [code]
- A. Jeffares*, A. Curth, M. van der Schaar. Looking at Deep Learning Phenomena Through a Telescoping Lens. HiLD Workshop @ ICML, 2024. [paper]
- A. Curth*, A. Jeffares*, M. van der Schaar. A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning. NeurIPS 2023 - Oral (top 0.5%). [paper] [code]
- A. Jeffares, T. Liu, J. Crabbé, M. van der Schaar. Joint Training of Deep Ensembles Fails Due to Learner Collusion. NeurIPS 2023. [paper] [code]
- N. Seedat*, A. Jeffares*, F. Imrie, M. van der Schaar. Improving Adaptive Conformal Prediction Using Self-Supervised Learning. AISTATS 2023. [paper] [code]
- A. Jeffares*, T. Liu*, J. Crabbé, F. Imrie, M. van der Schaar. TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization. ICLR 2023. [paper] [code]
- A. Jeffares, Q. Guo, P. Stenetorp, T. Moraitis. Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks. ICLR 2022 - Spotlight (top 5%). [paper] [code]