I am a PhD candidate at Politecnico di Milano, working under the supervision of Prof. Marcello Restelli at the RL^3 Group.

You can check out my (mostly up-to-date) CV here, but for the freshest updates, my Scholar or BlueSky profiles might be more reliable. And if you're curious about anything, feel free to drop me an email; I'm always happy to chat!

My research is in Reinforcement Learning (RL), and I am especially interested in getting around some of the usual pain points, like relying on massive amounts of data, training everything from scratch, or needing centralized schemes when multiple agents are involved. Real-world applications don’t always play by those rules, unfortunately.

Lately, I have been focused on what we can do before the actual task is even defined, a field known as unsupervised RL: things like pre-training models that make RL agents more general, more adaptable, and more behaviorally diverse. My goal is to develop methods that scale, generalize well, and are ready to tackle real-world messiness head-on.

To that end, I have delved into topics like partial observability, multi-agent settings, and decision-making under general utility functions. I have worked with Siemens to bring scalable multi-agent RL into industrial production scheduling, and more recently I have been collaborating with a London-based start-up, Inephany, on how to leverage RL for hyperparameter optimization in LLMs!

News et al.
