Superintelligence

Data Scientist
Valtech

June 15, 2017


My wife’s interest in policy and the broader societal implications of AI development led her to order one of the foundational books on general AI (perhaps the foundational book): “Superintelligence”, by Nick Bostrom.

My interest in machine learning and data science led me to borrow said book almost from the moment it landed on our doorstep. It’s been an interesting read, and has kept me occupied on trains and in other spare moments. I wouldn’t recommend it for bedtime reading, though, as it doesn’t shy away from being technical!

Below is a brief definition and overview of superintelligence as well as a basic summary of some of the issues and challenges that a superintelligent AI may portend for humanity.


What is superintelligence? Superintelligence (also called strong AI or general intelligence) is a system that would surpass human brain capacity in every field; imagine the smartest scientist ever born, an unbeatable trivia partner and the most engaging social butterfly, all rolled into one.

Sounds complicated? It is. Comparing its smarts to ours isn’t like comparing Einstein with your slowest colleague at work; that’s not even close. Superintelligence would be like comparing Einstein’s cognitive capacity to that of an ant, only humans are the ants. And, as ants, our inability to comprehend this superintelligence creates a lot of uncertainty and speculation on the topic. Articles written about superintelligence are full of qualifiers (coulds, woulds and shoulds), this article included.

How do we make one? It is imagined that, at some point in the future, a seed AI will be able to rewrite its own codebase, enabling it to recursively self-improve. In theory, this would allow the AI to become increasingly intelligent and, free of the biological limitations of humans, could lead to what experts call an “intelligence explosion”. And then what?

The rate at which an AI can improve itself will largely determine our ability to keep pace with it, control it and interact with it. With fast self-improvement, the AI might quickly outsmart us and we could lose control over it. With slow self-improvement, we might have time to perfect the AI and the governance of its use.
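To see why that rate matters so much, here’s a toy simulation in Python. It’s purely my own illustrative sketch, not a model from Bostrom’s book: intelligence grows in proportion to itself, and a single improvement-rate parameter is all that separates a fast takeoff from a slow one.

    # Toy model of recursive self-improvement (illustrative only, not a
    # prediction): each cycle, the AI uses its current capability to
    # improve itself, so growth compounds.

    def takeoff(improvement_rate, human_level=1.0, steps=10):
        """Return intelligence level after each self-improvement cycle."""
        intelligence = human_level
        history = []
        for _ in range(steps):
            # More capable systems make bigger improvements to themselves.
            intelligence += improvement_rate * intelligence
            history.append(intelligence)
        return history

    print("fast:", [round(x, 1) for x in takeoff(1.0)])   # doubles every cycle
    print("slow:", [round(x, 2) for x in takeoff(0.05)])  # creeps upward

With a rate of 1.0 the system doubles every cycle and leaves human level behind almost immediately; at 0.05 it inches upward slowly enough that we might have time to react.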

So who is watching out for us? Is anyone working to ensure that AI doesn’t get away from us? There’s good news and bad news here. The short answer is yes: many people are aware of the potential downsides. The most vocal lately has been Elon Musk, but many other prominent technologists have raised the alarm. There are now a few institutes looking into these issues, many affiliated with the universities where this research is taking place. The bad news is that, unsurprisingly, governments are woefully behind.

Would AI ever be unfriendly? Bostrom warns that a superintelligence might not have anthropomorphic goals, citing an AI-run paperclip factory that never knows when to stop, so it cannibalizes resources indefinitely, or a machine that, with the goal of ‘optimising human happiness’, plants electrodes inside our brains. Brain electrodes: that’s something to avoid. Indeed.

You can’t program common sense. Programming is typically very literal, specific and task-based. That won’t work for an agile, self-taught AI, so we’ll have to do better and design new ways to guide and control the AI, steering us away from brain-electrode-type outcomes.
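As a toy illustration of that literalness (entirely hypothetical, not from the book): hand an optimizer a naive ‘maximise the happiness score’ objective, and it will cheerfully pick the degenerate option, because nothing in the objective rules it out.

    # A toy "literal optimizer": it maximises exactly what we asked for,
    # with no common sense applied. The options and scores are made up.

    options = {
        "improve healthcare":        8.0,
        "reduce poverty":            9.0,
        "wire electrodes to brains": 10.0,  # scores highest on the literal metric
    }

    def literal_optimizer(scores):
        # Return whichever option maximises the stated objective.
        return max(scores, key=scores.get)

    print(literal_optimizer(options))  # -> "wire electrodes to brains"

The fix isn’t a cleverer one-liner; it’s finding ways to specify and constrain goals that capture what we actually value.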

I’m sure we’re a long way from anything yet. Well, almost sure. Most estimates put us 20 to 50 years away from true superintelligence, and even these are highly speculative timeframes. The truth is that we don’t know. However, applied AI, where we train an AI on a specific task using a massive, labelled data set, is very much here today and part of our daily lives. Most of the hot new tech that offers responses to a range of user queries involves applied AI. Specific applications include speech recognition, language translation, search, image clustering and even automated model creation (AutoML).
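To show how accessible applied AI has become, here’s a minimal image-clustering sketch (my own example, using scikit-learn’s bundled handwritten-digit images) that groups similar-looking digits with no labels at all:

    # Applied AI today: cluster 1,797 small grayscale digit images into
    # ten groups of visually similar images using k-means.
    from sklearn.datasets import load_digits
    from sklearn.cluster import KMeans

    digits = load_digits()                    # 8x8 grayscale digit images
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
    labels = kmeans.fit_predict(digits.data)  # assign each image to a cluster

    print(labels[:20])  # cluster assignments for the first 20 images

A few lines, an off-the-shelf library and a laptop: narrow, task-specific AI like this is nothing like superintelligence, but it’s already everywhere.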

Phew! Overwhelmed? You should be. AI has the potential to solve some of society’s most complex and existential challenges, from climate change to food scarcity. At the same time, it has the potential to massively disrupt employment as we know it and lead to a world where decisions are made by AI systems we don’t understand, making it impossible to audit them and determine whether they adhere to our values.

Do say: “Data policy and general AI is something to take seriously and watch out for in the future.”

Don’t say: “Skynet is coming - run for your lives!”
