Artificial intelligence has probably already drastically changed your life, whether you realize it or not. Right now, the AI that exists in our world is referred to as Artificial Narrow Intelligence, or ANI. Some examples of ANI that you may be familiar with are Siri, facial recognition software, and self-driving cars. ANI systems are excellent at accomplishing narrow tasks, but each is only good at one task at a time. While AI’s abilities may seem rudimentary now, one can’t help but wonder: how long until AI reaches a point where it is on par with human intelligence? AI that has reached human-level intelligence is referred to as Artificial General Intelligence, or AGI. But why stop there? Could AI reach a point where it surpasses human intelligence? Once AI becomes more capable than the human mind, it is called Artificial Superintelligence, or ASI. Now that we have covered the three stages of AI evolution, you may be questioning whether we will see AI advance to these latter stages of intelligence in our lifetime. And when we do, will we be able to control it? What will this mean for the human species? Will we be taken over by AI? What will be the point of humanity when AI surpasses the capabilities of the human mind? This has, understandably, become a popular topic of discussion in recent years.

One of the world’s leading researchers of AI, Dr. Ben Goertzel, discussed his thoughts on the future of AI when he was interviewed on Joe Rogan’s podcast. Joe Rogan begins with the statement: “People are really excited about it or they are really terrified of it. It seems to be the two responses. Either people have this dismal view of these robots taking over the world or they think it’s going to be some amazing sort of symbiotic relationship that we have with these things. It’s gonna evolve human beings past the monkey stage that we are at right now.”

Dr. Goertzel responds by saying, “Yeah I tend to be on the latter, more positive, side of this dichotomy, but I think one thing that has struck me in recent years is many people are now, you know, mentally confronting all the issues regarding AI for the first time. I have been working on AI for three decades and I first started thinking about AI when I was a little kid in the late ’60s and early ’70s when I saw AI and robots on the original Star Trek. So I guess I have had a lot of cycles to process the positives and negatives of it, where it is now like suddenly most of the world is thinking through all this for the first time and, you know, when you first wrap your brain around the idea that there may be creatures ten thousand or a million times smarter than human beings, at first, this is a bit of a shocker. It takes a while to internalize this into your world view.” When asked how long he thinks it will take before AI reaches superhuman general intelligence, Dr. Goertzel speculates a timeframe of 5 to 30 years.

While Dr. Ben Goertzel has reassuringly positive opinions on the future of AI, Elon Musk has expressed that he isn’t as optimistic. He stated during an interview that he believes we are “summoning the demon” when it comes to AI creation. Musk has also stated that Artificial Superintelligence could be far more dangerous than nuclear weapons.

How can we predict which path AI will take? Many argue that the future of AI will be up to us, its creators. Will humanity steer AI towards a path of positive evolution or towards destruction? We haven’t destroyed ourselves with nukes yet, so one could argue that we are capable of restraint when implementing something with this much destructive potential. We can only hope that the same restraint will be exhibited as we continue to develop AI technologies.

But who exactly gets to develop and benefit from AI as it evolves? Dr. Goertzel has been considering this question for a long time, and it inspired him to create a project called SingularityNET. According to the project’s website, SingularityNET is the world’s first decentralized AI network, one that “lets anyone create, share, and monetize AI services at scale” using blockchain technology. To put it in simple terms: SingularityNET is a global AI marketplace that aims to create a transparent network so that anyone can govern, benefit from, and take part in the development of AI. It also prevents AI development from being confined to a single company, infrastructure, or industry. This open platform would instead allow AIs running different programs to cooperate with one another so that they may evolve to become more useful to all of humanity. The decentralized network would allow anyone in the world to contribute their AI programs to the platform, take part in the evolution of this single general-purpose AI, and profit from the usefulness of the AI they contribute.

SingularityNET would allow us to take part in the evolution of AI as a collective, but Elon Musk poses another question: could we actually, physically, merge with AI? Musk worries that if we don’t, we will inevitably be outsmarted by AI and become essentially useless as a species. Hence his creation of Neuralink. Neuralink would allow humans to implant computers directly into their brains, creating a brain-machine interface, or BMI. BMIs have the potential to help people with a broad spectrum of clinical disorders as well as to enhance our ability to perceive and interact with the world around us. Neuralink works by implanting tiny electrodes that can both read signals from the brain and write data into it. In theory, it could even allow us to be connected to our smartphones as if they were extensions of our brains. With this upgrade we would be able to sharpen our minds and expand consciousness to an extent never thought possible. The possibilities are virtually endless, but at what cost?

Many worry about the negative consequences of merging with AI in this way. At what point do we lose our humanity and become something entirely different? Would it be possible for our brains to be hacked or controlled if we had computers implanted in them? Alternatively, some share Elon Musk’s concern that AI may one day overpower us and that we will become obsolete if we don’t adopt a technology like Neuralink. Ultimately, it is up to humanity to decide which path we will take, and the outcome remains uncertain. Do you feel optimistic about the future relationship between AI and humanity? Would you consider receiving a Neuralink implant? Feel free to share your thoughts in the comment section.
