
I am a neuroscientist and AI specialist with over 10 years of experience in analysing multidimensional data for pattern recognition and building brain-inspired computational models of sensory processing.

Currently I work at Cyanapse, a tech company I co-founded in 2016. At Cyanapse we build software tools that augment the perception of images and improve communication through visuals, using cutting-edge technologies such as deep learning. My work there is mainly R&D, focused on developing algorithms and pipelines for real-time feature extraction and manipulation of spatiotemporal data, combining methods from brain research, machine learning, and computer vision.

As a neuroscientist, I am interested in how neurons wire together to make sense of sensory signals, how they change their wiring through learning, how their organisation can be revealed by investigating spatiotemporal activity patterns, and how their behaviour can be imitated by machines. To study realistic scenarios, I have worked on tools for simulating large-scale brain models on GPUs and developed methods for analysing high-dimensional data.

Previously I worked at the University of Sussex as a postdoctoral fellow on the Green Brain Project, where I built computational models of the honeybee olfactory pathways involved in reinforcement learning and decision making, and connected these models to the world through chemosensors. I have also been a developer of GeNN, a GPU-enhanced Neuronal Network simulation environment based on code generation for NVIDIA CUDA.

Before that, I was a PhD student in neuroscience at UNIC, CNRS Gif-sur-Yvette, fully funded by the Paris Neuroscience School (ENP). During my PhD I performed voltage-sensitive dye imaging recordings in vivo in the primary visual cortex, and developed methods to denoise and reduce the dimensionality of the imaging data in order to study visual cortical dynamics.
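To give a flavour of that kind of analysis, the sketch below shows one standard approach to denoising and dimensionality reduction of an imaging recording: a truncated SVD that keeps only the dominant spatiotemporal components. It is a minimal illustration, not the actual pipeline from my PhD; the data, array shapes, and component count are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical voltage-sensitive dye recording, with each frame
# flattened so the array is (n_pixels, n_timepoints). Random data
# stands in for a real movie here.
rng = np.random.default_rng(0)
n_pixels, n_timepoints = 1024, 500
data = rng.standard_normal((n_pixels, n_timepoints))

# Centre each pixel's time series before the decomposition.
centred = data - data.mean(axis=1, keepdims=True)

# Truncated SVD: the leading components capture the dominant
# spatiotemporal structure; the noise-heavy tail is discarded.
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
k = 10  # number of components to keep (illustrative choice)

# U[:, :k] holds spatial modes, Vt[:k, :] their time courses; the
# low-rank reconstruction serves as the denoised recording.
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

The same low-rank representation does double duty: the rank-k reconstruction is the denoised movie, while the k component time courses are a compact description of the cortical dynamics.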