A new system enables autonomous vehicles to find their way using two low-cost cameras. Because relatively expensive Light Detection and Ranging (LiDAR) sensors can be omitted, the development could significantly reduce the cost of autonomous vehicles.
Autonomous vehicles generally rely on sensors such as LiDAR to map their surroundings. These sensors use lasers to build a 3D point map of the environment, imaging pedestrians, other vehicles, obstacles and road signs.
Relatively expensive
LiDAR sensors, however, are relatively expensive and can quickly add $10,000 to the cost of an autonomous vehicle, according to Cornell University. Despite this high price tag, the sensors have so far been the only way for autonomous vehicles to safely detect pedestrians, vehicles and other obstacles.
Autonomous vehicles can also map their surroundings with cameras, combining the images from two cameras to estimate depth. Until now, however, this method has detected objects far less accurately, and it was generally assumed that cameras were simply not accurate enough for the purpose.
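The depth estimation mentioned here follows from the standard pinhole stereo model: a point's horizontal disparity between the two camera images is inversely proportional to its distance. A minimal sketch, where the focal length and camera baseline are placeholder values and not figures from the Cornell system:

```python
def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.54):
    """Pinhole stereo model: depth = focal length * baseline / disparity.

    disparity_px -- horizontal pixel shift of a point between the two images
    focal_px     -- focal length in pixels (placeholder value)
    baseline_m   -- distance between the two cameras in metres (placeholder)
    """
    return focal_px * baseline_m / disparity_px

# Larger disparity means a closer object; small disparity means far away.
print(disparity_to_depth(18.9))  # 20.0 metres for these placeholder values
print(disparity_to_depth(189.0))  # 2.0 metres
```

The inverse relationship also explains why stereo depth degrades with distance: at long range a whole metre of depth corresponds to a fraction of a pixel of disparity.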
Fractional cost
Researchers at Cornell University have now developed a system that does make cameras suitable for this purpose. It uses two low-cost cameras placed on either side of the windscreen, which can detect objects with an accuracy approaching that of LiDAR sensors at a fraction of the cost.
The key lies in how the collected images are analysed by a neural network. Traditionally, a neural network analyses the front view of the images it is presented with. The researchers instead had the images analysed from above, a bird's-eye view, which markedly improved accuracy: the network analysed a top view roughly three times more accurately than a front view. This makes a two-camera setup a cost-effective alternative to LiDAR sensors.
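The front-to-top conversion can be pictured as follows: each pixel with an estimated depth is back-projected into a 3D point, and the points are then flattened onto a ground-plane grid. A minimal sketch using NumPy, with placeholder camera intrinsics (`FX`, `FY`, `CX`, `CY`) and grid dimensions that are illustrative assumptions, not details of the Cornell system:

```python
import numpy as np

# Placeholder pinhole-camera intrinsics: focal lengths and principal point.
FX, FY = 700.0, 700.0
CX, CY = 320.0, 240.0

def depth_to_top_view(depth, x_range=(-10.0, 10.0), z_range=(0.0, 40.0),
                      cell=0.5):
    """Back-project a per-pixel depth map into 3D points, then flatten
    them into a top-down (bird's-eye-view) occupancy grid of point counts."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                       # forward distance from the camera
    x = (us - CX) * z / FX          # lateral offset
    y = (vs - CY) * z / FY          # height (dropped in the top view)
    pts = np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
    pts = pts[pts[:, 2] > 0]        # keep points in front of the camera

    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((nz, nx), dtype=np.int32)
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iz = ((pts[:, 2] - z_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    np.add.at(grid, (iz[ok], ix[ok]), 1)
    return grid

# Synthetic depth map: a flat wall 20 m ahead of a 640x480 camera.
depth = np.full((480, 640), 20.0)
grid = depth_to_top_view(depth)
print(grid.shape)  # (80, 40): 40 m forward x 20 m across at 0.5 m cells
```

In this top-down representation an object occupies a compact, distance-independent patch of cells, rather than the perspective-distorted silhouette it has in the front view, which is the intuition behind the accuracy gain described above.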
'Distorting images'
"With camera images, it is very tempting to look at the front view, since this is what the camera sees," explains Kilian Weinberger, associate professor of computer science at Cornell University. "Here, however, lies the problem. If objects are viewed from the front, the way they are processed is distorted: objects blend into the background and their shapes change." If the images are analysed from above, this is not the case, which improves the accuracy of the analyses.
Weinberger expects that stereo camera systems could serve as the primary method of identifying objects in cheaper autonomous vehicles. In high-end models, he suspects, LiDAR will continue to be used, with stereo cameras as a backup system.
Learning to stay in lane independently
This is not the first time researchers have achieved success using conventional cameras in autonomous vehicles. In July, for instance, researchers at Cambridge University taught an autonomous vehicle to stay within its lane using just one camera. The vehicle was given no instructions beforehand and thus started the experiment with no prior knowledge.
The car independently found its way along the road and was corrected by a human instructor whenever it threatened to leave its lane. Based on these corrections, the vehicle learned to stay within its lane independently in 15 to 20.
Author: Wouter Hoeffnagel
Source: Cornell University