The Human Brain – Google's Latest Inspiration
The tech company Google has, for quite some time, been developing "neural networks," software that learns from data and applies what it has learned to other tasks, much as the neurons in the human brain do when learning something new.
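To make the idea concrete, here is a minimal sketch (not Google's actual system) of the simplest possible artificial neuron, which learns the logical AND function by adjusting its connection weights after each mistake, loosely mirroring how biological neurons strengthen connections through experience:

```python
def step(x):
    """Fire (output 1) if the weighted input crosses the threshold, else 0."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((input1, input2), expected_output) pairs."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            out = step(w1 * a + w2 * b + bias)
            err = target - out           # learn only from mistakes
            w1 += lr * err * a
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

# Train the neuron on every input combination of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predictions = [step(w1 * a + w2 * c + b) for (a, c), _ in data]
print(predictions)  # the trained neuron reproduces AND: [0, 0, 0, 1]
```

Google's networks stack many layers of such units and learn far richer features, but the principle of adjusting connection strengths from data is the same.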
Google says that after recent trials the networks are now ready for commercial use. This is not the first time the company has used the technique: it has already used neural networks to recognise cats in YouTube videos, enabling the computer to identify specific features of the video content (patterns, colours and so on), decide which were most relevant and flag what it thought was feline.
The company is now using neural networks to advance the speech recognition technology in its Android devices and other applications, much like Apple's Siri. Google believes that, given time, the networks will develop far enough to easily detect what appears in a photo or video without having to rely on the image's surrounding text.
Scientists believe that this software works in a way similar to processes neuroscientists believe take place in the visual cortex of mammals.
Yoshua Bengio, a professor at the University of Montreal, says:
"It turns out that the features of the learning networks Google is using are similar to the methods the brain uses to discover objects that exist."
This is not the first time such technology has been researched; a worldwide collaboration of scientists, professors and academics has, for years, been researching and developing spatial awareness technology for robot arms and similar hardware.
A leading member of this collaboration told Webdoo:
"The idea behind the technology seems simple to the layman, but its development is far from easy. Making a robot arm pick up an object from a constant position and place it in a designated place is simple; there is no need for a camera when the collection and placement points never change. But what happens if you move the object from its original position? The difficulty comes when you want the arm, fitted with a camera, to actually see the object, determine its position and move to pick it up. Once the arm has picked the object up, it is again easy to have it place the object in a pre-determined position, but what happens if another object is already there? The arm then has to determine that another object is present and place the collected object somewhere else."
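The decision-making the researcher describes can be sketched as follows. This is a hedged illustration, not the collaboration's actual code: the camera view is simplified to a dictionary mapping positions to whatever occupies them, and the function names are invented for the example.

```python
def locate_object(camera_view):
    """Return the (x, y) position of the first object the camera sees.
    camera_view is a simplified stand-in for real machine vision: a dict
    mapping grid positions to object names (or None for empty space)."""
    for position, obj in camera_view.items():
        if obj is not None:
            return position
    return None  # nothing in view

def choose_drop_off(camera_view, preferred, alternatives):
    """Use the preferred drop-off point unless something already occupies it;
    otherwise fall back to the first free alternative."""
    if camera_view.get(preferred) is None:
        return preferred
    for spot in alternatives:
        if camera_view.get(spot) is None:
            return spot
    return None  # nowhere free to place the object

# Scenario from the quote: the object has been moved from its usual spot,
# and the usual drop-off point is blocked by another object.
view = {(0, 0): None, (2, 3): "widget", (5, 5): "obstruction", (6, 5): None}
pickup = locate_object(view)   # the camera finds the widget at (2, 3)
drop = choose_drop_off(view, preferred=(5, 5), alternatives=[(6, 5)])
print(pickup, drop)            # picks up at (2, 3), diverts to (6, 5)
```

The hard engineering problem, as the quote makes clear, is not this decision logic but reliably producing the `camera_view` in the first place: turning raw camera pixels into positions of objects, which is exactly the kind of perception task neural networks are being applied to.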