Gellért Máttyus

Senior Machine Learning Engineer - Artificial Intelligence Development Center at Continental

Talks

Interpretable AI: Fusing deep learning and classical methods

To provide robust AI solutions for autonomous driving, it is essential to understand the system well and to be able to trace back errors, while also handling the many edge cases occurring in the real world. Modern AI-based solutions (i.e. deep learning) can achieve excellent performance when sufficient training data is available. However, tracing back errors in such systems is difficult, since the learned internal representations are hardly interpretable. Classical methods (e.g. optimization-based estimation) can provide interpretable results and incorporate our prior knowledge of the world, thus requiring little or even no training data. However, these methods do not scale well with more data, and they cannot handle problems with a large number of edge cases. In my presentation I will briefly discuss these two approaches and how they appear in autonomous driving projects. Then I will show how they can be used jointly to leverage the benefits of both and build a robust system.
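
As a purely illustrative sketch (not taken from the talk itself): one common way to combine the two approaches is to let a learned model produce an initial estimate that a classical, interpretable optimization step then refines. The Python snippet below assumes a hypothetical lane-boundary setup; the detected points, the network's initial coefficients, and the quadratic lane model are all invented for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Hypothetical inputs: a deep network has detected noisy lane-boundary
    # points and proposed initial coefficients for a quadratic lane model.
    x = np.linspace(0.0, 30.0, 40)                               # distance ahead (m)
    y = 0.002 * x**2 + 0.1 * x + rng.normal(0.0, 0.05, x.size)   # lateral offset (m)
    coeffs0 = np.array([0.0, 0.1, 0.0])                          # network's guess [a, b, c]

    def residuals(coeffs, x, y):
        # Classical, fully interpretable model: y = a*x^2 + b*x + c.
        a, b, c = coeffs
        return (a * x**2 + b * x + c) - y

    # Classical refinement, initialized by the network. Every term here is
    # inspectable, so an error in the final estimate can be traced back,
    # unlike the network's internal representation.
    fit = least_squares(residuals, coeffs0, args=(x, y), loss="soft_l1")
    print("refined lane coefficients:", fit.x)

The division of labor mirrors the abstract: the data-hungry learned component handles perception and the edge cases, while the optimization step contributes interpretability and prior knowledge of the world (here, the lane-geometry model).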