10 Sep 2017

Should we be afraid of the power of AI?

Does Artificial Intelligence (AI) pose an existential threat to mankind? Will the race for AI superiority be the most likely cause of WW3, as Tesla and SpaceX CEO Elon Musk has warned? Or will the nation that leads in AI become "leader of the world", as Russian President Vladimir Putin has claimed? Different questions, but they share a premise: is AI really that powerful?

One thing that caught my eye last week is that Google is building an AI team in China, despite its services being banned there. The depth of engineering talent in China has drawn many Western companies to set up research labs and incubators there. The Chinese government has set a goal of becoming a world leader in AI by 2030. It is said that the Chinese government woke up to the potential of AI when DeepMind's AlphaGo beat world champion Lee Sedol at the ancient Chinese game of Go in March 2016 – literally at their own game.

Whilst the "big three" in China – Alibaba, Tencent and Baidu – are investing heavily in AI and recruiting top talent from Chinese universities, most sources believe that the US is further ahead in its capabilities, with companies like DeepMind, Google, Facebook, OpenAI, IBM, Microsoft, Apple and NVIDIA leading the AI research charge. Amazon offers powerful machine learning services on AWS that are widely used by research companies.

The chart below shows the sector breadth of Corporate Venture Capital (CVC) investment in AI over the past five years. AI is having an impact on virtually every industry, including consumer services (the "Horizontal" slice of the chart).

[Chart: CVC investments in AI by sector, past five years]

To come back to the original question: should we be afraid of the potential power of AI?

According to "The AI Revolution: The Road to Superintelligence" by Tim Urban, we are currently in the phase of Weak AI, where AI specialises in one area – autonomous cars, intelligent spam filters, investment robo-advisors. The worst that can happen in this phase is an isolated catastrophe: a power grid going down, a nuclear power plant malfunction, a financial markets disaster. Serious, but not an existential threat.

The next phase is Human-Level AI (also known as Strong AI), where a computer is as smart as a human in every respect – a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. How hard is it to get to that level?

First, we need computers with the necessary computational power. One estimate is that in 2015 we had a computer with 1/1000th of the computational capacity of the brain, putting us on pace for an affordable computer by 2025 that rivals the power of the brain. The best software approach could be to build two major skills – doing research on AI and coding changes into itself – allowing an AI computer not only to learn but to improve its own architecture. We would teach computers to be computer scientists so they could bootstrap their own development, and that would become their main job: figuring out how to make themselves smarter. There is some debate about how soon AI will reach human-level general intelligence. The median answer in a survey of hundreds of scientists, asked when we would be more likely than not to have reached Human-Level AI, was 2040.
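The hardware side of that timeline is essentially an exponential-growth calculation. As a rough back-of-the-envelope sketch (my own illustration, not from Urban's article), assuming compute per dollar doubles about once a year:

```python
import math

def years_to_parity(fraction=1e-3, doubling_time=1.0):
    """Years until `fraction` of brain-level compute reaches 1.0,
    assuming a steady doubling every `doubling_time` years."""
    doublings = math.log2(1.0 / fraction)  # ~10 doublings from 1/1000th
    return doublings * doubling_time

# Starting at 1/1000th of brain-level compute in 2015:
print(2015 + round(years_to_parity()))  # -> 2025
```

Change the assumed doubling time and the parity date moves quickly, which is one reason such estimates vary so widely.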

And then there is Artificial Superintelligence. Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Unlike human brains, which face hard constraints in hardware (clock speeds, size and storage, reliability and durability), software (editability, upgradeability) and networking (collective capability), computers face potentially no such limits. Once computers reach Human-Level AI, there would be no stopping them from continuing to improve, fuelled by "recursive self-improvement". This could happen very quickly.
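The "recursive self-improvement" feedback loop can be illustrated with a toy compounding model (a deliberately simplistic sketch of my own, not from Bostrom): if each improvement cycle raises capability in proportion to the system's current capability, growth that looks modest at first becomes explosive.

```python
def takeoff(intelligence=1.0, gain=0.10, cycles=100):
    """Toy model: each cycle the system improves itself by `gain`
    times its *current* capability, so progress compounds."""
    history = [intelligence]
    for _ in range(cycles):
        intelligence *= 1 + gain  # smarter systems make bigger improvements
        history.append(intelligence)
    return history

h = takeoff()
print(round(h[10], 2), round(h[-1]))  # -> 2.59 13781
```

The per-cycle gain is unremarkable; it is the compounding that produces the runaway curve – which is why the jump from Human-Level AI to superintelligence could be so fast.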

That would be a scary world for humans, and it would not necessarily matter whether it was the Americans, Chinese or anyone else who made the breakthroughs along the way.

 
