DeepMind’s latest AI breakthrough – first, remove human constraints

In February this year, I posted this about artificial intelligence and jobs:

“In short, we don’t know what we’re doing. Firstly because we don’t have the intellectual capacity or processes generally to think widely enough about the systemic impacts across our economies, and secondly because we’re close to creating something that we (pretty quickly) won’t understand anyway.”

There have already been a few examples of AI creating itself, but a striking one was just announced by DeepMind (a Google company). To quote from the BBC article:

“Google’s DeepMind says it has made another big advance in artificial intelligence by getting a machine to master the Chinese game of Go without help from human players.

The AlphaGo program, devised by the tech giant’s AI division, has already beaten two of the world’s best players.

It had started by learning from thousands of games played by humans.

But the new AlphaGo Zero began with a blank Go board and no data apart from the rules, and then played itself.

Within 72 hours it was good enough to beat the original program by 100 games to zero.”

This new version of the software started with a neural network that knew only the rules of the game, and then worked everything out by playing games against itself. The quote from the leader of the development team, David Silver, says it all:

“We’ve actually removed the constraints of human knowledge and it’s able, therefore, to create knowledge itself from first principles, from a blank slate,” he said.

…and it did so in three days, rather than the years it takes a human to become an expert player (an expert who would still lose to the AI every time anyway).
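To make the “blank slate” idea concrete, here is a minimal sketch of a self-play training loop in Python. It is a toy, not DeepMind’s method: tic-tac-toe stands in for Go, a clipped table of move preferences stands in for AlphaGo Zero’s deep neural network, and a crude reinforce-the-winner rule stands in for its gradient training guided by Monte Carlo tree search. Every name in it is hypothetical, for illustration only.

```python
# A toy illustration of tabula-rasa self-play, in the spirit of
# AlphaGo Zero but nothing like its scale. Tic-tac-toe stands in
# for Go, and a table of move preferences stands in for the deep
# network. All names here are hypothetical.

import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' for a win, 'draw' for a full board, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

# The "network": preference scores for (position, move) pairs,
# starting from a blank slate -- it knows nothing but the rules.
prefs = defaultdict(float)

def choose_move(board):
    """Sample a legal move, softly preferring moves that have won before."""
    legal = [i for i, s in enumerate(board) if s == "."]
    weights = [2.0 ** prefs[(board, m)] for m in legal]
    return random.choices(legal, weights=weights)[0]

def self_play_game():
    """Play one game against itself; record every (position, move, player)."""
    board, player, history = "." * 9, "X", []
    while winner(board) is None:
        move = choose_move(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        player = "O" if player == "X" else "X"
    return history, winner(board)

def train(games):
    """Reinforce the winner's moves, discourage the loser's."""
    for _ in range(games):
        history, result = self_play_game()
        if result == "draw":
            continue
        for board, move, player in history:
            delta = 1.0 if player == result else -1.0
            # Clip so the soft move choice never saturates completely.
            prefs[(board, move)] = max(-6.0, min(6.0, prefs[(board, move)] + delta))

train(20000)
empty = "." * 9
best = max(range(9), key=lambda m: prefs[(empty, m)])
print("Preferred opening move after self-play:", best)  # often 4, the centre
```

The point of the toy is the shape of the loop: start with nothing beyond the legal moves, play yourself, and feed the outcomes back in. AlphaGo Zero does the same thing at vastly greater scale and sophistication.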

It is highly unlikely that the DeepMind team know how AlphaGo is making its decisions. And when they apply this technology to other scenarios, from scientific research to algorithm-based business decisions, they probably won’t understand those either. And neither will we.

Again, from my post in February:

“As systems are now learning for themselves, being able to adjust their own activity and ‘thought’ patterns, how do we know where they’ll end up? Already some systems are capable of designing their own next generation. Computers don’t ‘think’ as we do, why would they end up developing along the same paths as us? As they ‘evolve’ by themselves, they will end up creating structures and processes that we never designed and don’t understand; perhaps we may not even be able (i.e. intellectually equipped) to understand them.

I’m not talking Skynet and the Terminator here, but the myriad small advances and changes that will creep into the world and its interconnected systems, accelerating towards a not-too-distant future when we won’t actually know how it all works. We really ought to be more concerned about sleep-walking into that than we seem to be.”

A brave new world indeed.