Published: 22 Dec 2015 | By: QYRESEARCH
Researchers have developed two new smartphone-based systems that can identify a user's location and orientation in places where GPS fails to function, a capability that could help enable driverless cars.
The best part about these systems is that they can identify the different components of a road scene in real time on an ordinary smartphone or camera, whereas the dedicated sensors that perform the same task cost millions.
The only limitation is that, at present, these systems cannot control a driverless car. Nevertheless, enabling a machine to see and precisely identify its location and surroundings is a key step towards autonomous robots and vehicles.
Roberto Cipolla, a professor at the University of Cambridge, explains that vision is a powerful and vital sense, and driverless cars need it too. While it may sound easy, teaching a machine to see is remarkably difficult.
One of the systems, SegNet, takes an image of a street scene it has never seen before and classifies every pixel into one of 12 categories – including street signs, buildings, roads, cyclists, and pedestrians – in real time. It can even cope with shadows, changing light, and night-time scenes.
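The task SegNet performs is known as semantic segmentation: producing a dense, per-pixel label map for an input image. SegNet itself is a deep encoder-decoder neural network, but the toy sketch below (the `segment` function and its dominant-colour rule are purely illustrative, not SegNet's actual method) shows the shape of the input and output:

```python
import numpy as np

# A subset of the 12 scene categories SegNet recognizes (illustrative).
CLASSES = ["road", "building", "sign", "cyclist", "pedestrian", "sky"]

def segment(image: np.ndarray) -> np.ndarray:
    """Return an (H, W) map of class indices for an (H, W, 3) RGB image.

    A real system runs a learned network here; this toy rule just picks
    a class from each pixel's dominant colour channel, to illustrate
    that the output is a dense, per-pixel labelling of the scene.
    """
    assert image.ndim == 3 and image.shape[2] == 3
    dominant = image.argmax(axis=2)       # 0=R, 1=G, 2=B for each pixel
    return dominant % len(CLASSES)        # map channels onto class ids

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)
    labels = segment(img)
    print(labels.shape)                   # one label per pixel: (4, 6)
```

The key point is the output format: every pixel of the image gets exactly one class label, which is what lets the system identify where the road ends and a pedestrian or cyclist begins.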
Even though SegNet was trained primarily on urban environments and highways, it performed rather well in initial tests on snowy, rural, and desert scenes.
Using the system is also straightforward. Users can visit the SegNet website and search for or upload an image of any town or city in the world, and the system will label every component of the road scene. So far, SegNet has correctly labelled over 90 per cent of the pixels in an image, whether taken by day or night. A PhD student who has worked with SegNet says the system is impressively good at recognizing the different elements of an image.
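The "over 90 per cent of pixels" figure refers to pixel accuracy, the standard metric for this task: the fraction of pixels whose predicted label matches the ground-truth label. A minimal sketch of how that is computed (the array values here are made up for illustration):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    assert pred.shape == truth.shape
    return float((pred == truth).mean())

# Two tiny 2x3 label maps: the prediction disagrees at one pixel.
truth = np.array([[0, 1, 1], [2, 2, 0]])
pred  = np.array([[0, 1, 2], [2, 2, 0]])
print(pixel_accuracy(pred, truth))   # 5 of 6 pixels agree -> ~0.833
```

A 90 per cent score therefore means that in a typical street image, nine out of every ten pixels are assigned the correct category.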