Tuesday, October 9, 2012

Google deploys ‘virtual brain’ for smarter searches

Google is using its massive computing power to simulate the human brain to improve the results of its speech recognition engine…


Image source: google.com

After teaching itself to recognize cats, people and other objects by watching YouTube videos, Google’s virtual brain technology is being put to the ultimate test: making Google’s products smarter.
 
The virtual brain technology, which is patterned after how brain cells operate, will first focus on speech recognition, according to an article in the Technology Review.
 
“Most people keep their model in a single machine, but we wanted to experiment with very large neural networks. If you scale up both the size of the model and the amount of data you train it with, you can learn finer distinctions or more complex features,” said Jeff Dean, an engineer helping lead the research at Google.
 
Google’s learning software simulates interconnected brain cells, forming a neural network that teaches itself how to react to incoming data.
 
Such neural networks can learn without the need for human assistance, and can move beyond research demos into real-world use.
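
To make that concrete, here is a minimal sketch in Python of the idea behind such self-teaching networks: a toy autoencoder that, with no human-provided labels, learns to compress and reconstruct its own input. Everything here (the data, the layer sizes, the learning rate) is made up for illustration; Google’s actual system was vastly larger and more sophisticated.

```python
# A minimal sketch, NOT Google's system: a tiny autoencoder that
# "teaches itself" by reducing its own reconstruction error,
# with no human-provided labels.
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 four-dimensional points that secretly lie
# on a two-dimensional pattern the network can discover.
basis = rng.normal(size=(2, 4))
data = rng.normal(size=(200, 2)) @ basis

# Encoder (4 -> 2) and decoder (2 -> 4) weights.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

for step in range(2000):
    hidden = np.tanh(data @ W_enc)   # compress each point to 2 numbers
    recon = hidden @ W_dec           # try to rebuild the original point
    err = recon - data               # how wrong the reconstruction is
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = hidden.T @ err / len(data)
    grad_enc = data.T @ ((err @ W_dec.T) * (1 - hidden ** 2)) / len(data)
    W_dec -= 0.1 * grad_dec
    W_enc -= 0.1 * grad_enc

print("mean reconstruction error:", float(np.mean(err ** 2)))
```

The network improves simply by shrinking its own reconstruction error, which is what “learning without human assistance” means in practice.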
 
Last June, Google made headlines when its engineers publicized the results of an experiment involving 10 million images taken from YouTube videos.
 
The simulation involved 16,000 processors spread across 1,000 computers, running for 10 days.
 
When applied to speech recognition, the neural network is expected to benefit Android, Google’s smartphone operating system, as well as its search app for Apple devices.
 
So far, the neural net handles only U.S. English.
 
“We got between 20 and 25 percent improvement in terms of words that are wrong. That means that many more people will have a perfect experience without errors,” said Vincent Vanhoucke, a leader of Google’s speech-recognition efforts.
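Vanhoucke’s figure is a relative reduction in word error rate, the fraction of transcribed words that come out wrong. A quick worked example with a hypothetical baseline shows what that improvement amounts to:

```python
# Hypothetical baseline, for illustration only: suppose 15 of every
# 100 transcribed words were wrong before the neural network.
baseline_wer = 0.15
for relative_gain in (0.20, 0.25):
    new_wer = baseline_wer * (1 - relative_gain)
    print(f"{relative_gain:.0%} relative gain: {baseline_wer:.0%} "
          f"of words wrong -> {new_wer:.2%}")
```

In other words, the network removes a fifth to a quarter of the remaining errors, rather than adding 20 to 25 points of accuracy.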
 
Other Google products
 
The neural network is expected to boost other Google products, such as its image search tools that can understand the contents of a photo without having to check its accompanying text.
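The difference is between indexing a photo by the words that happen to surround it on a page and indexing it by what a classifier says is in it. A toy sketch, with a made-up stand-in in place of a real neural-network classifier:

```python
# Illustrative sketch only; `classify` is a hypothetical stand-in
# for a neural-network image classifier.
def classify(image_path):
    fake_predictions = {"beach.jpg": ["sea", "sand"],
                        "cat_photo.jpg": ["cat"]}
    return fake_predictions.get(image_path, [])

# Build a search index keyed by image content, not surrounding text.
index = {}
for path in ("beach.jpg", "cat_photo.jpg"):
    for label in classify(path):
        index.setdefault(label, []).append(path)

print(index.get("cat"))  # ['cat_photo.jpg'], found with no caption text
```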
 
Even Google’s self-driving cars and Google Glass are expected to benefit from such software, Technology Review said.
 
Next steps
 
Dean said his team is now testing models that understand both images and text together.
 
“You give it ‘porpoise’ and it gives you pictures of porpoises. If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word,” he said.
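A rough sketch of the bidirectional lookup Dean describes: put words and images into one shared vector space and search for nearest neighbors in either direction. The two-dimensional vectors below are invented stand-ins for embeddings a real system would learn jointly from paired image-text data.

```python
# Illustrative sketch only: words and images as vectors in one
# shared space, so lookup works in either direction.
import numpy as np

def nearest(query, items):
    """Return the name whose embedding is closest to the query (cosine)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(items, key=lambda name: cos(query, items[name]))

# Invented embeddings, for illustration only.
word_vecs = {"porpoise": np.array([0.9, 0.1]),
             "truck": np.array([0.1, 0.9])}
image_vecs = {"photo_123.jpg": np.array([0.8, 0.2]),   # a porpoise photo
              "photo_456.jpg": np.array([0.2, 0.8])}   # a truck photo

# Word -> image: which photo best matches the word "porpoise"?
print(nearest(word_vecs["porpoise"], image_vecs))       # photo_123.jpg
# Image -> word: which word best matches the porpoise photo?
print(nearest(image_vecs["photo_123.jpg"], word_vecs))  # porpoise
```

Because both kinds of data live in the same space, a single model can serve both directions of the query.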
 
Another next step could be to have the neural net learn the sounds of words, leading to speech recognition that can get extra clues from video, Technology Review said.
 
It added that Google’s self-driving cars could also benefit, using the real-time data they collect to understand their surroundings.
 
Yoshua Bengio, a professor at the University of Montreal who works on similar machine-learning techniques, said Google’s work is a step toward creating artificial intelligence that can match animal intelligence and, eventually, human intelligence.
 
Bengio said Google’s neural networks work similarly to the visual cortex in mammals, the part of the brain that processes visual data.
 
“There’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world,” he said.
 
But for now, he said, Google’s neural networks still cannot do many things necessary for intelligence, such as reasoning with information collected from the outside world.
 
For his part, Dean said Google’s neural networks have humans beat in some areas.
 
“We are seeing better than human-level performance in some visual tasks,” he said.