Using artificial intelligence to predict how other road users will behave

Researchers have created a machine learning system that effectively predicts the future trajectories of many road users, such as drivers, cyclists, and pedestrians, which could allow an autonomous vehicle to move more safely through city streets.

If a robot is going to drive a vehicle safely in downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians will do. Credit: MIT

The new machine learning system could one day help driverless cars predict the next movements of drivers, pedestrians and cyclists in real time.

People can be one of the biggest obstacles in the way of fully autonomous vehicles operating on city streets.

If the robot is going to safely drive a vehicle in downtown Boston, it should be able to predict what nearby drivers, pedestrians and cyclists will do.

However, predicting behavior is a challenge, and modern artificial intelligence solutions are either too simplistic (they may assume pedestrians always follow a straight line), too conservative (to avoid pedestrians, the robot just leaves the car in the parking lot), or can only predict the next steps of a single agent (roads usually carry many users at once).

These simulations show how the system the researchers developed can predict the future trajectories (shown using red lines) of the blue vehicles in complex traffic situations involving other cars, bicyclists, and pedestrians. Credit: MIT

The MIT researchers' behavior-prediction framework first guesses the relationship between two road users — which car, cyclist, or pedestrian has the right of way and which agent will yield — and uses those relationships to predict future trajectories for multiple agents.

When compared against real traffic flow in an enormous dataset compiled by the autonomous driving company Waymo, these estimated trajectories were more accurate than those from other machine-learning models. The MIT technique even outperformed Waymo's recently published model. And because the researchers broke the problem into simpler pieces, their technique used less memory.

“This is a very intuitive idea, but no one has fully explored it before, and it works quite well. The simplicity is definitely a plus. We are comparing our model with other state-of-the-art models in the field, including the one from Waymo, the leading company in this area, and our model achieves top performance on this challenging benchmark. This has a lot of potential for the future,” says co-lead author Xin “Cyrus” Huang, a graduate student in the Department of Aeronautics and Astronautics and a research assistant in the lab of Brian Williams, professor of aeronautics and astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Huang and Williams on the paper are three researchers from Tsinghua University in China: co-lead author Qiao Sun, a research assistant; Junru Gu, a graduate student; and senior author Hang Zhao PhD ’19, an assistant professor. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Multiple small models

The researchers’ machine-learning method, called M2I, takes two inputs: past trajectories of the cars, cyclists, and pedestrians interacting in a traffic setting such as a four-way intersection, and a map with street locations, lane configurations, etc.
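As a rough illustration of those two inputs, a scene might be represented along these lines (the class names and fields below are hypothetical stand-ins for illustration, not the paper's actual data format):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AgentTrack:
    agent_id: int
    agent_type: str                     # "vehicle", "cyclist", or "pedestrian"
    history: List[Tuple[float, float]]  # past (x, y) positions, oldest first

@dataclass
class SceneInput:
    tracks: List[AgentTrack]                            # past trajectories
    lane_centerlines: List[List[Tuple[float, float]]]   # map polylines

# A toy four-way-intersection scene: one car and one pedestrian.
scene = SceneInput(
    tracks=[
        AgentTrack(0, "vehicle", [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]),
        AgentTrack(1, "pedestrian", [(5.0, -2.0), (5.0, -1.5), (5.0, -1.0)]),
    ],
    lane_centerlines=[[(0.0, 0.0), (10.0, 0.0)]],
)
print(len(scene.tracks))  # 2 interacting agents
```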

Using this information, a relation predictor infers which of two agents has the right of way first, classifying one as a passer and one as a yielder. Then a prediction model, known as a marginal predictor, guesses the trajectory for the passing agent, since this agent behaves independently.

A second prediction model, known as a conditional predictor, then guesses what the yielding agent will do based on the actions of the passing agent. The system predicts a number of different trajectories for the yielder and passer, computes the probability of each one individually, and then selects the six joint results with the highest likelihood of occurring.
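The two-stage pipeline described above can be sketched with simple stubs. This is a minimal illustration under stated assumptions: the function names and toy probabilities are hypothetical, and M2I's real predictors are learned neural networks, not hand-written rules.

```python
def relation_predictor(agent_a, agent_b):
    # Stub: in M2I this is a learned classifier; here we simply
    # treat the first agent as the passer and the second as the yielder.
    return agent_a, agent_b

def marginal_predictor(passer):
    # Stub: candidate future trajectories for the passing agent,
    # each paired with a probability. The passer is predicted
    # independently of the other agent.
    return [("pass_straight", 0.7), ("pass_slow", 0.3)]

def conditional_predictor(yielder, passer_trajectory):
    # Stub: the yielder's options depend on what the passer does.
    if passer_trajectory == "pass_straight":
        return [("wait", 0.8), ("creep", 0.2)]
    return [("go_now", 0.6), ("wait", 0.4)]

def predict_joint(agent_a, agent_b, k=6):
    passer, yielder = relation_predictor(agent_a, agent_b)
    joint = []
    for p_traj, p_prob in marginal_predictor(passer):
        for y_traj, y_prob in conditional_predictor(yielder, p_traj):
            # Probability of the joint outcome factors into
            # marginal * conditional.
            joint.append(((p_traj, y_traj), p_prob * y_prob))
    # Keep the k joint predictions with the highest likelihood.
    joint.sort(key=lambda pair: pair[1], reverse=True)
    return joint[:k]

best = predict_joint("car_0", "pedestrian_1")
print(best[0])  # most likely joint outcome: ('pass_straight', 'wait')
```

The point of the factoring is visible in the last function: rather than scoring every joint behavior with one large model, the joint probability is built from two small, cheaper pieces.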

M2I outputs a prediction of how these agents will move through traffic for the next eight seconds. In one example, their method caused a vehicle to slow down so a pedestrian could cross the street, then speed up when they cleared the intersection. In another example, the vehicle waited until several cars had passed before turning from a side street onto a busy, main road.

While this initial research focuses on interactions between two agents, M2I could infer relationships among many agents and then guess their trajectories by linking multiple marginal and conditional predictors.
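Extending the pairwise idea to more agents, as described above, amounts to chaining the same two building blocks: each yielding agent is predicted conditioned on the agent it yields to. The sketch below is a hypothetical illustration of that chaining, not the paper's actual multi-agent algorithm.

```python
def predict_chain(ordered_agents):
    """Predict agents in right-of-way order: the first agent is
    predicted marginally, and each later agent is predicted
    conditioned on its predecessor's trajectory.

    `ordered_agents` is assumed to already be sorted by a relation
    predictor (passers before yielders). Trajectories here are just
    labels for illustration."""
    predictions = {}
    previous_trajectory = None
    for agent in ordered_agents:
        if previous_trajectory is None:
            # Marginal prediction for the lead agent.
            previous_trajectory = f"{agent}:free_flow"
        else:
            # Conditional prediction: depends on the agent ahead.
            previous_trajectory = f"{agent}:yield_to({previous_trajectory})"
        predictions[agent] = previous_trajectory
    return predictions

out = predict_chain(["car_0", "cyclist_1", "pedestrian_2"])
print(out["pedestrian_2"])
```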

Real-world driving tests

The researchers trained the models using the Waymo Open Motion Dataset, which contains millions of real traffic scenes involving vehicles, pedestrians, and cyclists recorded by lidar (light detection and ranging) sensors and cameras mounted on the company’s autonomous vehicles. They focused specifically on cases with multiple agents.

To determine accuracy, they compared each method's six prediction samples, weighted by their confidence levels, to the actual trajectories followed by the cars, cyclists, and pedestrians in a scene. Their method was the most accurate. It also outperformed the baseline models on a metric known as overlap rate: when two predicted trajectories overlap, that indicates a likely collision. M2I had the lowest overlap rate.
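The two evaluation ideas described above can be illustrated with a simplified sketch: a best-of-k displacement error over prediction samples, and a check for overlapping trajectories. The metric names, distance threshold, and toy numbers here are illustrative simplifications (this version is unweighted), not Waymo's exact benchmark definitions.

```python
import math

def displacement_error(pred, truth):
    # Average point-wise Euclidean distance between two trajectories.
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(truth)

def min_ade(samples, truth):
    # Score a set of prediction samples by the best (lowest-error) one.
    return min(displacement_error(s, truth) for s in samples)

def overlaps(traj_a, traj_b, radius=1.0):
    # Two trajectories "overlap" if the agents ever come closer than
    # `radius` metres at the same timestep (a proxy for collision).
    return any(math.dist(a, b) < radius for a, b in zip(traj_a, traj_b))

truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
samples = [
    [(0.0, 0.5), (1.0, 0.5), (2.0, 0.5)],  # off by 0.5 m everywhere
    [(0.0, 2.0), (1.0, 2.0), (2.0, 2.0)],  # off by 2.0 m everywhere
]
print(min_ade(samples, truth))           # 0.5
print(overlaps(samples[0], samples[1]))  # False: always 1.5 m apart
```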

“Rather than just building a more complex model to solve this problem, we took an approach that is more like how a human thinks when they reason about interactions with others. A human does not reason about all hundreds of combinations of future behaviors. We make decisions quite fast,” Huang says.

Another advantage of M2I is that, because it breaks the problem down into smaller pieces, it is easier for a user to understand the model’s decision-making. In the long run, that could help users put more trust in autonomous vehicles, says Huang.

But the framework can’t account for cases where two agents are mutually influencing each other, like when two vehicles each nudge forward at a four-way stop because the drivers aren’t sure who should be yielding.

The researchers plan to address this limitation in future work. They also want to use their method to simulate realistic interactions between road users, which could be used to verify planning algorithms for self-driving cars or to create huge amounts of synthetic driving data that improve model performance.

“Predicting future trajectories of multiple, interacting agents is under-explored and extremely challenging for enabling full autonomy in complex scenes. M2I provides a highly promising prediction method with the relation predictor to discriminate agents predicted marginally or conditionally which significantly simplifies the problem,” wrote Masayoshi Tomizuka, the Cheryl and John Neerhout, Jr. Distinguished Professor of Mechanical Engineering at University of California at Berkeley and Wei Zhan, an assistant professional researcher, in an email. “The prediction model can capture the inherent relation and interactions of the agents to achieve the state-of-the-art performance.” The two colleagues were not involved in the research.

Reference: “M2I: From Factored Marginal Trajectory Prediction to Interactive Prediction” by Qiao Sun, Xin Huang, Junru Gu, Brian C. Williams and Hang Zhao, 28 March 2022, arXiv preprint (Computer Science: Robotics).

This research is supported, in part, by the Qualcomm Innovation Fellowship. Toyota Research Institute also provided funds to support this work.
