MIT researchers develop system to give driverless cars “human-like” reasoning

Boston: Researchers from the Massachusetts Institute of Technology (MIT) have developed a system that enables driverless cars to navigate new, complex environments using maps and visual input, a basic reasoning ability that current driverless cars lack.

The system can also detect mismatches between the map and features of the road, allowing it to determine whether a contradiction exists and correct its course. It learns the steering patterns of human drivers using video camera data and simple GPS-like maps. Unlike current autonomous systems, it does not rely on complex maps built from 3D scans, which are computationally intensive to generate and process on the fly.
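One way to picture such a mismatch check (a toy sketch of the idea, not the researchers’ actual algorithm): run a camera-plus-map steering model of the kind described later in this article twice, once with the real map and once with a blank, uninformative map, and flag a contradiction when the two answers disagree sharply. All names and thresholds here are hypothetical.

```python
from typing import Callable

import torch


def map_mismatch(
    steer: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],
    camera: torch.Tensor,
    map_patch: torch.Tensor,
    threshold: float = 0.3,
) -> torch.Tensor:
    """Hypothetical contradiction check (not the MIT team's method):
    run the steering network once with the real map and once with a
    blank map. If the two predictions disagree sharply, the camera
    view and the map are telling conflicting stories, e.g. the map
    shows a straight road where the camera sees a curve."""
    with torch.no_grad():
        with_map = steer(camera, map_patch)
        without_map = steer(camera, torch.zeros_like(map_patch))
    # Boolean per example: True where vision and map contradict each other
    return (with_map - without_map).abs() > threshold
```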

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” said Daniela Rus from MIT.

The system, installed in a driverless Toyota Prius, was initially trained by a human operator. The car was equipped with several cameras and a basic GPS navigation system. It first collected data from suburban streets, analysing the road structures and obstacles along the way.

When tested autonomously, the system successfully navigated the car along a pre-planned path in a different, forested area designed for testing autonomous vehicles.

“With our system, you don’t need to train on every road beforehand. You can download a new map for the car to navigate through roads it has never seen before,” said Alexander Amini from MIT.

The system uses a machine learning model called a convolutional neural network (CNN), commonly used in systems requiring image recognition. During training, the CNN learns to steer from a human driver by correlating steering commands with the curvature of the road while simultaneously observing the map. For situations such as straight roads, four-way or T-shaped intersections, forks and rotaries, it learns the most commonly used steering pattern.
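To make this concrete, here is a minimal, hypothetical sketch in Python (using PyTorch) of a CNN that fuses a camera frame with a coarse rasterized map patch and regresses a steering command against a human driver’s recorded steering. None of this is the researchers’ actual code; layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SteeringCNN(nn.Module):
    """Hypothetical sketch: fuse a camera image and a coarse map patch
    to predict a steering command (not the MIT team's actual model)."""

    def __init__(self):
        super().__init__()
        # Convolutional encoder for the RGB camera frame
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # Smaller encoder for the single-channel, GPS-style map patch
        self.map_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # Head maps the fused features to a single steering angle
        self.head = nn.Sequential(
            nn.Linear(48 * 16 + 32 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, map_patch):
        features = torch.cat(
            [self.image_encoder(image), self.map_encoder(map_patch)], dim=1)
        return self.head(features)


model = SteeringCNN()
camera = torch.randn(8, 3, 120, 160)   # batch of camera frames
map_patch = torch.randn(8, 1, 64, 64)  # matching coarse map crops
human_angle = torch.randn(8, 1)        # recorded human steering angles

# Imitation-learning step: penalize deviation from the human driver
loss = nn.functional.mse_loss(model(camera, map_patch), human_angle)
loss.backward()
```

A fuller system would likely predict a distribution over steering commands rather than a single angle, so that ambiguous situations such as forks and intersections can be represented; the sketch above keeps only the simplest imitation-learning objective.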
“In the real world, sensors do fail. We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road,” Amini said.
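One generic way to encourage this kind of robustness, a standard training trick rather than necessarily what the MIT team did, is to randomly corrupt or drop individual sensor streams during training, so the network learns not to depend on any single input. A hedged sketch, reusing the tensors from the model above:

```python
import torch


def corrupt_sensors(camera, map_patch, p_drop=0.2, noise_std=0.1):
    """Hypothetical augmentation: with probability p_drop, blank out an
    entire sensor stream; otherwise add Gaussian noise. This forces the
    model to stay usable when any one input degrades or fails."""
    if torch.rand(1).item() < p_drop:
        camera = torch.zeros_like(camera)        # simulate camera failure
    else:
        camera = camera + noise_std * torch.randn_like(camera)
    if torch.rand(1).item() < p_drop:
        map_patch = torch.zeros_like(map_patch)  # simulate a missing map
    else:
        map_patch = map_patch + noise_std * torch.randn_like(map_patch)
    return camera, map_patch
```

Feeding such corrupted inputs through the same imitation loss teaches the network to fall back on whichever sensor is still reliable.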

The researchers say that the system uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create complex maps that demand enormous storage: a LIDAR map of a city like San Francisco takes about 4 TB of memory, while the maps used by this system could cover the entire world in just 40 GB.

By: Nikhil Vatsa
