Robert leads the Deep Learning for Point Cloud and Fusion team, working on research and development of machine-learning-based multi-sensor fusion technologies. The AI-based products enabled by the team bring cutting-edge solutions to safety-critical autonomous driving applications.
Previously, Robert worked at the intersection of quantitative finance and machine learning, where he helped NY- and London-based clients on site take advantage of machine learning and analytics and prepare for changes in global markets.
Robert holds degrees in Computer Engineering, Machine Learning, and Business Intelligence from BME and AIT.
LIDARs provide accurate depth measurements but at low resolution and without the texture and color information of cameras. Cameras provide superior resolution and color sensing but no depth measurements. We build a LIDAR-camera low-level fusion system from automotive-grade solid-state LIDARs and cameras, using deep learning to combine and learn from the two complementary modalities: a sparse point cloud and a camera image. The resulting real-time low-level multi-sensor fusion enables superior world modeling around the car.