Autonomous Cars Can Predict How Selfish Your Driving Is
By TheWAY - November 22, 2019
Self-driving cars could soon be able to classify you as a selfish or altruistic driver. While that might bruise some egos, researchers from MIT CSAIL claim it will make autonomous vehicles (AVs) much safer when driving alongside humans. Predicting how humans will behave, and adjusting an algorithm’s reasoning to how selfish or selfless that behavior is likely to be, could dramatically reduce accidents between AI-enabled vehicles and human drivers.
Properly integrating AI with the complicated and nuanced world of human behavior is a huge barrier to overcome, especially in applications where it can make the difference between life and death. Beyond making self-driving cars safe enough for our streets, teaching AI to comprehend the less quantifiable parts of life could let it help humans in roles it previously could not handle, and advance AI applications in general.
Driving amongst us
The new study, headed up by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), aims to teach AI how to classify personalities based on tools used in social psychology. The team used a metric called Social Value Orientation (SVO), which represents the degree to which someone is selfish (“egoistic”) versus altruistic or cooperative (“prosocial”), and then taught the AI system to estimate the SVOs of different drivers. Classifying a driver as egoistic or prosocial tells the system how likely they are to drive aggressively (for example, running a red light) or to be more passive on the road (such as slowing down to let someone turn into the road). This allows an autonomous driving system to comprehend signals that humans understand from experience—such as what it means when someone is speeding close behind you in the outside lane, or leaving space in a queue of traffic.
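SVO is usually formalized as an angle in a reward space, so that a driver’s effective utility blends their own reward with the reward of those around them. The short sketch below illustrates that standard formulation; the function names, threshold, and numbers are illustrative assumptions, not taken from the CSAIL implementation.

```python
import math

# Minimal sketch of the angular formulation of Social Value Orientation (SVO).
# An SVO angle of 0 rad weighs only the driver's own reward (egoistic);
# pi/4 rad weighs self and others equally (prosocial). Names and thresholds
# here are illustrative, not the CSAIL implementation.

def svo_utility(svo_angle: float, reward_self: float, reward_other: float) -> float:
    """Utility a driver with the given SVO angle assigns to an outcome."""
    return math.cos(svo_angle) * reward_self + math.sin(svo_angle) * reward_other

def classify_svo(svo_angle: float) -> str:
    """Coarse egoistic/prosocial label from an estimated SVO angle (radians)."""
    return "prosocial" if svo_angle > math.pi / 8 else "egoistic"

# An egoistic driver (angle ~0) values cutting in even when it costs others;
# a prosocial driver (angle ~pi/4) trades away some of their own gain.
print(svo_utility(0.0, reward_self=1.0, reward_other=-0.5))           # 1.0
print(svo_utility(math.pi / 4, reward_self=1.0, reward_other=-0.5))   # ~0.35
print(classify_svo(math.pi / 4))                                      # prosocial
```

Because an angle near zero recovers a purely self-interested planner while an angle near π/4 weighs everyone’s reward equally, a single scalar is enough to span the egoistic-to-prosocial spectrum.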
In tests that simulated merging lanes and making unprotected left turns across traffic, the system predicted the behavior of other cars 25% more accurately than current self-driving systems. In the left-turn simulations, for example, it knew when to wait for a selfish driver and when to turn ahead of an altruistic one. This is a promising glimpse of what the AI could do in real driving conditions, but it is nowhere near ready to be rolled out. Instead, the team suggests that a partial version could be used in the near term to complement existing in-car AI, for example by highlighting an aggressive driver entering someone’s blind spot. Training AI with this rudimentary social awareness helps to overcome a major obstacle for AVs and other human-facing AI applications. Reducing uncertainty about what humans might do will make these AI systems more ‘confident’, and opens up a range of possibilities for AI that can interpret and predict human behavior.
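To make the left-turn example concrete, here is a toy decision rule in the same spirit: estimate the oncoming driver’s SVO from how often they have yielded recently, then accept a smaller gap when the driver looks prosocial. This is a hypothetical stand-in for the paper’s planner, not a description of it; every name and threshold below is an assumption for illustration.

```python
import math

def estimate_svo(observed_yielding_rate: float) -> float:
    """Map a driver's observed tendency to yield (0..1) to an SVO angle.

    A crude stand-in for behavior-based SVO estimation: drivers who never
    yield are treated as egoistic (0 rad), drivers who always yield as
    fully prosocial (pi/4 rad)."""
    return observed_yielding_rate * math.pi / 4

def should_turn(svo_angle: float, gap_seconds: float) -> bool:
    """Turn if the gap is safe outright, or if the oncoming driver appears
    prosocial enough to slow down and let us in."""
    safe_gap = 4.0         # seconds; illustrative threshold
    cooperative_gap = 2.5  # smaller gap acceptable against a prosocial driver
    if gap_seconds >= safe_gap:
        return True
    return svo_angle > math.pi / 8 and gap_seconds >= cooperative_gap

# A driver who yielded in 80% of recent interactions looks prosocial, so a
# 3-second gap is judged acceptable; an aggressive driver forces a wait.
print(should_turn(estimate_svo(0.8), gap_seconds=3.0))  # True
print(should_turn(estimate_svo(0.1), gap_seconds=3.0))  # False
```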
Binary behavior
The ability to understand (or at least quantify) human behavior is still a major sticking point for AI, and the team hopes to extend its training set to include pedestrians, cyclists, and other road users to expand the system’s understanding. “Creating more human-like behavior in AVs is fundamental for the safety of passengers and surrounding vehicles,” says graduate student Wilko Schwarting, lead author on the new paper published in the Proceedings of the National Academy of Sciences (PNAS). “Behaving in a predictable manner also enables humans to understand and appropriately respond to the AV’s actions.”
This aspect of the system, that an AI which behaves more like us is easier for humans to interpret, implies a much wider application for the technology than AVs alone. An algorithm that can not only understand human behavior but also react with an appropriate “human” response would make AI far more versatile and much better suited to time-intensive care work. When caring for dementia patients, for example, a similar system could detect whether a person was behaving normally or their mood had become erratic, and so judge when to call a human medical professional. Outside of service-based roles, industrial co-bots that could predict human behavior would work more safely and effectively amongst humans, and could even take on the most dangerous roles if they understood social cues and signals, such as the volume and urgency of a foreman’s command.
Human understanding
While the MIT CSAIL team may not have captured human behavior exactly, a feat which would be one step removed from artificial general intelligence (AGI), their work toward quantifying whether someone is likely to act selfishly or altruistically will certainly help to advance AVs, and could allow AI to be used in far more sensitive applications than it can currently handle. Creating AI that can operate safely and autonomously amongst humans, without misunderstanding social signals or acting too cautiously in uncertain situations, will make AI, and robots in particular, far safer and more versatile. This particular system may be in its infancy, but I look forward to seeing how far this level of understanding can be developed, and what applications it may serve in the near future.