Many specialists in the field of Artificial Intelligence debate when AI will surpass human intelligence. Indeed, there is now a checkers-playing AI that can play a perfect game, and we know that IBM built a chess-playing machine that beat a top human chess player, so some say that AI has already surpassed human intelligence.
Of course, those are just games, and the human mind is capable of multiple intelligences, so when will an AI machine be as smart as the world's smartest human? Well, this question has been posed many times, and most AI scientists believe that moment will arrive around 2020 or 2030. I completely disagree. Why, you ask?
Well, there sure seem to be countless pessimists in the ranks of AI niche scientists; in fact, many humans fail to achieve what they seek because of exactly this kind of negative feedback. So we really need Artificial Intelligence to design an artificially intelligent system that can surpass all the humans who are stuck in linear thought, convincing themselves that it cannot be done until a prescribed date.
Who can tell us why everyone is convincing everyone else that it cannot be done for two decades? Just because someone says it cannot be done does not make it so. When they say such things, it only means they believe they cannot do it in less time, and if they believe that, then they are right; but for others to adopt the same line of reasoning does not follow any sort of real logic.
So why are those who cannot think logically in the field of AI, which combines human thought processes with machines of logic? Let's say the upper end of human IQ is no more than 210, which really is not that high when you think about it. Why couldn't we develop a system that mimics human thought processes using many combinations of strategies? Why is everyone so adamant about their specific methods, which often can reach only a certain level at present?
Having had many original thoughts on the subject that I have not seen in any research papers, it appears to me that we are about half a breakthrough from cracking the whole thing right now, not in twenty years. It could come at any time, and the sooner the better.
The entire subject is interesting, really. Those who predict such a long-term point of singularity almost seem to be promoting job security. Twenty years is not good enough; that is unacceptable. We should not promote weakness, laziness, or defeatism, or try to convince ourselves that we cannot do something until some far-off date, when half of these niche scientists may be dead by then. It is time to bring Artificial Intelligence to the forefront now, not in two decades. Think on it.