In this blog post we’re going to look at the possibility of building a super-intelligent machine and, assuming it’s possible, roughly when it could happen. It is widely agreed within the AI community that we are still far from having the technical capability to make this happen, or even to simulate the brain of the simplest of animals. However, that shouldn’t stop us from thinking about when it could happen.
In his book “Superintelligence,” Oxford philosophy professor Nick Bostrom compiles the opinions of some of the leading AI researchers, gathering their estimates on the arrival of Human-Level Machine Intelligence (HLMI) and then of superintelligence. According to the experts surveyed, there was a 10% chance we’d see HLMI by 2022, a 50% chance by 2040, and a 90% chance by 2075. As for the time it takes for a superintelligence to emerge after HLMI, the experts felt there was a 10% chance it would be developed within two years of HLMI and a 75% chance it would exist within 30 years. So assuming we develop it on the slower side of things and have HLMI by 2075, this group of experts thinks there is a 75% chance we’ll see a superintelligent AI by 2105. All of this, of course, can only be taken at face value: the estimates rest on many assumptions and deliberately don’t account for the specific hurdles the community faces. All the same, it suggests at the very least that our grandchildren may have to face the consequences, or reap the benefits, of the work being done now in the AI field.
Now, I subscribe to the end of the spectrum that believes these milestones will come sooner than we think. I’ll admit this could be a result of both my wish to witness this within my lifetime and my fear that these milestones could very well bring about our destruction. The AI could “choose” to end us, which, no matter how unlikely, is still possible; we could destroy ourselves in a global war racing toward a general AI; or any number of other plausible scenarios could play out. Ultimately, though, I feel this way because I understand the magnitude of the consequences in either direction, good or bad. I want to caveat all of this by saying that I am certainly more fearful of humanity’s misuse of artificial intelligence than of the creation of a superintelligence and its consequences. While I think it is essential to bring up the less likely, more extreme scenarios, the more realistic outcome is one where governments and corporations use artificial intelligence to unethically advance technologies that infringe on the liberties of the average person. But I’m still hopeful that we can start to talk about these impending changes and how we can all work to ensure our collective future is a bright one.
Mac is currently the Head of IT and a Program Instructor at a small nonprofit dedicated to teaching youth life lessons through bicycles. He is also pursuing a degree in IST from Penn State.