Presentation
Since October 2023, I have been a PhD student in the CMAP team at École Polytechnique (Paris), working at the intersection of reinforcement learning and federated learning. I am supervised by Eric Moulines, Alain Durmus, and Karim Abed-Meraim, and I also collaborate closely with Paul Mangold and Daniil Tiapkin.
Before that, I studied at Télécom Paris and completed the MVA Master’s program.
My research interests include:
- Federated Reinforcement Learning
- Theory of Reinforcement Learning
- Federated Learning and Personalization
News
- January 2026: Two of my first-author papers were accepted at ICLR 2026 and AISTATS 2026, respectively: “Beyond Softmax and Entropy: Convergence Rates of Policy Gradients with f-SoftArgmax Parameterization & Coupled Regularization” and “On Global Convergence Rates for Federated Softmax Policy Gradient under Heterogeneous Environments”! I will attend both conferences to present this work. Feel free to come chat if you’d like to discuss federated learning or policy gradients!
- January 2025: My first paper as first author, “Federated UCBVI: Communication-Efficient Federated Regret Minimization with Heterogeneous Agents”, has been accepted at AISTATS 2025!
- September 2024: My first paper, “SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning”, has been accepted at NeurIPS 2024!
- October 2023: I am starting a PhD at École Polytechnique!
