Alan Jeffares
I’m a 2nd-year Machine Learning PhD student at the University of Cambridge in the Department of Applied Mathematics. I am interested in building a better understanding of empirical phenomena in deep learning (e.g. double descent, loss landscapes) and developing methodological advances from these insights (e.g. deep ensembles). I hold an MSc in Machine Learning from University College London and a BSc in Statistics from University College Dublin. I have previously worked as a Data Scientist at Accenture’s global center for R&D innovation and at the Insight Research Centre for Data Analytics. Email: aj659 [at] cam [dot] ac [dot] uk.
🗞️ News 🗞️
Sep 2023 → Two papers accepted at NeurIPS 2023! One oral (top 0.5% of submissions) offers an alternative take on double descent, suggesting it may not be so at odds with classical statistical notions of model complexity. The other, a poster, investigates whether deep ensembles can be trained jointly rather than independently.
Jan 2023 → Two papers accepted! 🥳 One at AISTATS 2023 ([paper]) and one at ICLR 2023 ([paper]). These papers explore self-supervised learning for conformal prediction and a new regularizer for neural networks, respectively. I look forward to presenting them with my co-authors!
April 2022 → I have officially started a PhD in Machine Learning at the University of Cambridge under the supervision of Mihaela van der Schaar!
Jan 2022 → First paper accepted! 🎉 Work from my master’s thesis, completed under the supervision of Timos Moraitis and Pontus Stenetorp, has been accepted as a spotlight (top 5% of submissions) at ICLR 2022. This paper took a neuroscience-inspired approach to improving the accuracy-efficiency trade-off in RNNs.
Dec 2021 → Graduated 🎓 I have officially graduated with an MSc in Machine Learning from UCL. I was also a recipient of a Dean’s list award for “outstanding academic performance”.
📚 Research 📚
Please find some of my publications below (a more up-to-date list can be found on Google Scholar).
“*” denotes equal contribution.
Preprints
- A. Curth, A. Jeffares, M. van der Schaar. Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers. [preprint]
Conferences
- A. Curth*, A. Jeffares*, M. van der Schaar. A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning. NeurIPS, 2023 - Oral (top 0.5%). [paper] [code]
- A. Jeffares, T. Liu, J. Crabbé, M. van der Schaar. Joint Training of Deep Ensembles Fails Due to Learner Collusion. NeurIPS, 2023. [paper] [code]
- N. Seedat*, A. Jeffares*, F. Imrie, M. van der Schaar. Improving Adaptive Conformal Prediction Using Self-Supervised Learning. AISTATS, 2023. [paper] [code]
- A. Jeffares*, T. Liu*, J. Crabbé, F. Imrie, M. van der Schaar. TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization. ICLR, 2023. [paper] [code]
- A. Jeffares, Q. Guo, P. Stenetorp, T. Moraitis. Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks. ICLR, 2022 - Spotlight (top 5%). [paper] [code]