I'm a Senior Robotics Engineer at Volkswagen, specializing in autonomous systems that integrate perception, planning, and control. I work across both robotics and self-driving cars—treating them as embodied agents that must sense, reason, and act in complex, real-world environments. My focus is on building scalable, real-time autonomy by combining traditional control with modern learning-based methods.
My technical foundation combines advanced motion planning and control with modern AI. I develop algorithms for autonomous navigation, trajectory optimization, and real-time control, and pair them with deep learning architectures including transformer-based models, multimodal systems, and reinforcement learning frameworks. This combination lets me build robots that don't just execute predefined tasks but can reason about their environment, understand natural language instructions, and adapt their behavior in real time.
Throughout my career, I've worked on diverse robotic platforms including autonomous vehicles, aerial drones, humanoid robots, and industrial automation systems. I specialize in developing end-to-end AI pipelines that integrate computer vision, natural language processing, and robotic control to create truly intelligent autonomous agents. My experience includes deploying production-ready systems using ROS/ROS2, PyTorch, and modern MLOps practices, with a strong emphasis on safety-critical applications and real-time performance optimization.
I'm particularly excited about the convergence of foundation models and robotics, working on projects that involve multimodal learning, embodied AI, and human-robot interaction. My goal is to create robots that can seamlessly integrate into human environments, understand context through multiple sensory modalities, and execute complex tasks through natural language communication—ultimately advancing the field toward more intuitive and capable autonomous systems.