Professor Geoffrey Hinton, who has been working at Google for the past two years, has revealed that the company is now working with thought vectors, an approach that may solve some of the challenges inherent in applying AI to natural language processing.
For those who have not heard of thought vectors, the idea is this: by ascribing a set of numbers (a vector) to every word, a computer can be trained to grasp the actual meaning of a word from its position in a "meaning space" or meaning cloud. A sentence is then not a string of words but a path between points in that space, and meaning can be extracted from the specific sets of numbers along the path, the thought vectors.
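As a concrete, deliberately simplified sketch of this idea, here is a toy "meaning space" in Python. The three-dimensional vectors and the values attached to each word are invented purely for illustration; real systems learn vectors with hundreds of dimensions rather than having them hand-assigned. Cosine similarity then measures how close two words sit in the space:

```python
import math

# Hand-crafted toy vectors; real word/thought vectors are learned, not assigned.
vectors = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.7, 0.2],
    "banana": [0.1, 0.05, 0.9],
}

def cosine(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words with related meanings lie close together in the space.
print(cosine(vectors["king"], vectors["queen"]))   # close to 1.0
print(cosine(vectors["king"], vectors["banana"]))  # much smaller
```

The point of the sketch is only the geometry: similarity of meaning becomes nearness of position, which a program can compute with simple arithmetic.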
You may think this is already possible: you ask Siri a question and her AI provides an answer. In reality, though, Siri and assistants like her do not truly understand the words. They match patterns and retrieve answers based on the words and sentences they are given, which is not the same as genuine understanding of meaning.
If thought vectors achieve their aim, machines would extract real meaning much the way humans do, rather than merely translating words and assembling them. The hard question is which numbers to assign to each word, and this is tackled by deep learning, whose core idea is that computer programs learn for themselves rather than being taught inflexible rules.
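To make that last point concrete, here is a minimal, hypothetical sketch of how the numbers can be learned rather than hand-assigned, loosely in the spirit of word2vec-style training. The toy corpus, vector dimensionality, learning rate, and iteration count are all invented for illustration. Words that appear in the same context have their vectors nudged together; random "negative" pairs are nudged apart:

```python
import math
import random

random.seed(0)

# Toy corpus: words that share contexts should end up with similar vectors.
corpus = [
    ["king", "rules", "kingdom"],
    ["queen", "rules", "kingdom"],
    ["banana", "grows", "tree"],
    ["apple", "grows", "tree"],
]

vocab = sorted({w for sent in corpus for w in sent})
dim = 8
# Start from small random vectors; training shapes them, no rules are given.
vec = {w: [random.uniform(-0.5, 0.5) for _ in range(dim)] for w in vocab}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

lr = 0.1
for epoch in range(200):
    for sent in corpus:
        for w in sent:
            for c in sent:
                if w == c:
                    continue
                # Positive pair: pull co-occurring words toward each other.
                g = lr * (1.0 - sigmoid(dot(vec[w], vec[c])))
                for i in range(dim):
                    vw, vc = vec[w][i], vec[c][i]
                    vec[w][i] += g * vc
                    vec[c][i] += g * vw
                # Negative sample: push away from a random unrelated word.
                n = random.choice(vocab)
                if n in sent:
                    continue
                g = lr * sigmoid(dot(vec[w], vec[n]))
                for i in range(dim):
                    vw, vn = vec[w][i], vec[n][i]
                    vec[w][i] -= g * vn
                    vec[n][i] -= g * vw

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# After training, words with shared contexts have drifted together.
print(cosine(vec["king"], vec["queen"]))   # shared contexts: relatively high
print(cosine(vec["king"], vec["banana"]))  # disjoint contexts: relatively low
```

No one tells the program that "king" and "queen" are related; that structure emerges from the data, which is the sense in which the program learns rather than follows rules.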
The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.” Hinton believes the thought-vector approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language and making leaps of logic.