Driver assistance and autonomous driving technologies have made significant progress over the past decade. Much of this research has focused on monitoring the external environment, while far less attention has been paid to the vehicle interior. Interior monitoring increases safety, comfort, and convenience for all vehicle occupants, especially in the case of autonomous shared vehicles.
The ISM workshop will be a half-day online event on Oct. 24, with the following schedule:
| Time | Session |
| --- | --- |
| 14:00-14:10 | Welcome Session |
| 14:10-14:55 | Keynote I: João Carreira – Multimodal Scene and Action Understanding |
| 14:55-15:15 | Oral Presentation: Semi-automatic pipeline for large-scale dataset annotation task: a DMD application |
| 15:15-15:35 | Oral Presentation: Detecting Driver Drowsiness as an Anomaly Using LSTM Autoencoders |
| 15:35-15:55 | Project KARLI – Level-Compliant Driver Behavior Monitoring: Approaches from the German KARLI project |
| 15:55-16:05 | Coffee Break |
| 16:05-16:50 | Keynote II: Mohammad Mavadati – Cabin Monitoring Systems: The Future of Safer and More Engaging Mobility |
| 16:50-17:10 | Oral Presentation: Personalization of AI models based on federated learning for driver stress monitoring |
| 17:10-17:30 | Oral Presentation: XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model |
| 17:30-17:50 | Project Easy Ride – Occupant emotion monitoring system |
| 17:50-18:00 | Closing Session and Prize Announcement |
Keep in mind the following dates and deadlines:
*All deadlines are 8:59 PM CET / 11:59 AM PST.
More information on camera-ready paper submissions will be released soon.
Bio: João Carreira has been a Research Scientist at DeepMind in London since 2016. He received his Ph.D. from the University of Bonn, Germany, in 2012 and was a postdoctoral fellow at UC Berkeley (2014-2015). Early in his career he worked on object segmentation and 3D reconstruction, later on action recognition and video understanding, and most recently on general perception models; highlights include the popular Kinetics dataset and the I3D and Perceiver models.
Bio: For the past two decades, Dr. Mavadati has been fascinated by how humans gather and interpret visual information, not only to survive but also to improve their quality of life. He has dedicated his career to researching and developing technologies that provide a deep understanding of complex and nuanced human behaviors, all with the goal of bridging the gap between humans and machines.
During his graduate studies at the University of Denver, Dr. Mavadati leveraged image analysis techniques to better understand human behaviors and affective states. As part of his Ph.D. research, he used automated computer vision techniques to detect and track the facial expressions of people interacting with computers and robots.
In 2015 Dr. Mavadati joined Affectiva, the startup that invented the field of Emotion AI. In 2021 Affectiva was acquired by Smart Eye, the global leader in Human Insight AI. At Affectiva, Dr. Mavadati led the team building technology to better understand people's emotional states across a wide range of use cases, including Media Analytics and Automotive. In the past few years, Dr. Mavadati and his team at Affectiva, and now Smart Eye, have focused mainly on developing vision-based solutions that interpret visual information gathered from the vehicle interior to help create safer and more enjoyable experiences for all vehicle occupants.
Dr. Mavadati has lived and worked in the USA for the past twelve years. In his free time, he enjoys traveling, hiking, gardening, exploring authentic restaurants, and playing tennis.