What exactly is artificial intelligence? You might be surprised to learn that the answer is not as straightforward as you may think. Part of the reason is the ever-changing definition of what constitutes AI.
One of the driving factors that makes AI so difficult to define is known as the 'AI Effect'. The phenomenon goes as follows: once AI is capable of achieving something previously thought to require human intelligence or intuition, that achievement is written off as a product of raw computing power rather than of a genuinely intelligent program.
For example, in a previous article, we discussed the milestone of Deep Blue defeating a world champion chess player. This directly led to an argument over whether the victory was a result of AI or of mere brute-force calculation. Once people realized that the AI could be boiled down to complex mathematics, the mystique surrounding it was suddenly gone.
The moment something can be achieved by AI, it is no longer seen as being AI, which fuels ongoing debates over what truly counts as artificial intelligence.
Because people have a natural tendency to mystify AI and associate it with super-human intelligence (partly due to pop culture), they may feel like there isn't much progress being made in the field. This common misconception can be traced to the difference between artificial narrow intelligence (ANI, or weak AI) and artificial general intelligence (AGI, or strong AI). The public doesn't quite realize that they are constantly surrounded by ANI, whether it's trading bots, voice assistants, smart thermostats, or even robotic vacuums. This leads to constant disappointment, because their perception of what AI is may only include strong AI technologies.
Many people don't see these ANI programs as being "intelligent" in a human sense and write them off as mere products of clever programming by a human. This is partly because human intelligence is difficult to define, but also because humans want to believe that there is something truly unique and intangible about human intelligence. Whether there is something truly unique about our intelligence isn't clear, and this is part of the problem: we compare AI to ourselves, yet we don't fully understand our own intelligence. While leaps and bounds are being made every day in the AI research and development community, the progress is not as noticeable to the average person. In reality, the road to AGI will most likely be a slow and gradual one, unlike what we see in movies and TV.
What is surprising is that this effect is not unique to the general public—it even finds its way into advanced AI research circles. As researchers come to better understand the capabilities and limitations of these systems, they struggle to place them under the umbrella of "intelligence" because these machines and programs simply don't "think" like humans. The major issue here is that we don't have the words to distinguish between these two very different forms of intelligence.
To make the problem even worse, a recent study found that almost half of self-proclaimed "AI startups" used no technology that was truly AI. The term AI has been used and abused by many companies so that they can ride the hype wave of artificial intelligence. Whether it's to secure more funding from investors or to trick customers into thinking that their technology is truly cutting-edge, this trend has done a huge disservice to the companies and researchers that are genuinely using or developing AI. When marketers over-hype and over-promise on technologies they don't fully understand, the inevitable result is disappointed investors and customers, which in turn leads to less funding for AI as a whole.
We need to combat these trends and come to a better understanding of what AI is, what it isn't, and what it can be. To do this, we need to better define what constitutes intelligence and become better at distinguishing between human intelligence and machine intelligence. As the public becomes more educated on these definitions, researchers can develop AI without having to redefine it every time the field takes a step forward. We play a dangerous game when we refuse to consider ANI machines artificially intelligent, and we cannot continue to write off the progress made in this field.