The autonomous vehicle experiments being conducted in San Francisco, other US cities and around the world are made possible by variations of AI and software. Most rely on tens of thousands of dollars in sensors and computing hardware, as well as maps. Imagry, an Israel-based startup, shuns much of that. Its system is vision-based: it needs only cameras, dispensing with expensive lidar and radar. And, in what may be a first, it doesn't need maps.
Indeed, it seems to sense its way around the same way as a good London cabbie, who only gets a license if they can negotiate the entire city without resorting to maps.
Storing an entire city, along with its environs, takes too much storage, says Imagry. Instead, its software determines where to go and how to get there by sensing its environment in real time. We watch an autonomous vehicle in Arizona approach a road crew directing traffic by alternating “slow” and “stop” on a sign, usually a tense situation for any autonomous vehicle. But the Imagry-equipped Kia negotiates the scene smoothly, without hesitation or error. In another situation, we see gentle nudges from the person behind the wheel, such as tapping the turn signal, to which the vehicle quickly responds.
Imagry uses “supervised learning,” a type of machine learning in which a system is trained on labeled data: a dataset of input-output pairs where each input (e.g., an image) is associated with a known output, such as an object label, which is presumably also associated with certain behaviors (e.g., speed, the ability to change path). During training, the model learns to map inputs to outputs by identifying patterns in the labeled examples. It makes predictions and adjusts its internal parameters to minimize the difference (error) between its predictions and the actual outputs. The goal is to enable the system to generalize and make accurate predictions on new, unseen data based on the patterns it learned during training. Imagry uses supervised learning to mimic human driving by training on datasets where camera images are paired with the corresponding driving actions.
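The training loop described above can be sketched in miniature. This is a hypothetical toy, not Imagry's pipeline: a one-parameter linear model is fit to made-up labeled pairs (a normalized distance reading paired with the braking intensity a good driver applied), adjusting its parameters to shrink prediction error, then queried on an unseen input.

```python
# Hypothetical sketch of supervised learning: labeled input-output pairs,
# prediction, error, and parameter adjustment. The feature (distance to an
# obstacle) and label (braking intensity) are illustrative assumptions only.

# Toy labeled dataset: (normalized distance to obstacle, target braking 0..1)
data = [(0.1, 0.9), (0.3, 0.7), (0.5, 0.5), (0.8, 0.2), (1.0, 0.0)]

w, b = 0.0, 0.0   # model parameters, learned from the examples
lr = 0.1          # learning rate

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b     # make a prediction for this input
        err = pred - y       # difference from the labeled answer
        w -= lr * err * x    # nudge parameters to reduce the error
        b -= lr * err

# Generalization: predict for an input not seen during training
print(round(w * 0.4 + b, 2))  # → 0.6
```

A real vision-based driving model replaces the single number with camera pixels and the linear rule with a deep network, but the loop is the same: predict, measure error against the human label, adjust.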
Imagry uses supervised learning because it provides a direct way to train autonomous systems with clear examples of how to respond in specific situations. By leveraging annotated datasets, Imagry can fine-tune its autonomous driving algorithms to handle real-world scenarios.
The what-would-a-good-driver-do approach is not an entirely new concept. Elon Musk suggested it, according to Walter Isaacson’s biography of Musk. By identifying “good drivers” and learning how they respond to various situations, Musk projected that Teslas could react to any situation rather than being limited to the many – though never enough – preprogrammed scenarios other autonomous vehicles must rely on.
Imagry will be on display at CES 2025 in booth #5976 in LVCC West Hall.