Robots taking jobs matters, but a bigger point is that we don’t know what we’re doing

Much of the discussion about the ‘robot revolution’ today centres on the potential for mass unemployment, and what should be done about it. The existing commentary correctly (in my opinion) focuses on how to handle the aftermath rather than how to control it, given that the introduction of such technologies is all but inevitable. But focusing on jobs misses the key point about AI.

In short, we don’t know what we’re doing. First, because we don’t have the intellectual capacity or the processes to think widely enough about the systemic impacts across our economies; and second, because we’re close to creating something that we (pretty quickly) won’t understand anyway.

The media discussion mostly focuses on whose jobs are at risk: ‘Can your job be done by a robot?’ and so on. I understand that this sort of simplicity is required in the media, but thinking only about the impact on individuals when they lose their jobs is very narrow. What will the accumulation of those job losses do to the wider fabric of society? How many other jobs in the supply chain and service sector depend on that job, and on that person having an income?

As jobs are pared away, people’s daily lives will change: they won’t be commuting into work; they won’t be going out for coffee or lunch near where they (used to) work; business and vacation travel will reduce; personal elective spending will fall in line with incomes and confidence; and so on. There will therefore be significant and multi-faceted knock-on effects across the economy. Will we need the same capacity of public transport? Will the same number of coffee shops be economically viable? Or retailers in general? Will such service businesses change location? Will anybody buy a suit (male or female)? What will happen to the sectors that suffer a significant downturn as a result? What will happen to commercial rents and rates in city centres? How will local government services need to change? And so on, indefinitely.

Given that we don’t seem capable of systemic thought today about relatively simple, bounded things (for example, how aspects of our cities should be designed: public spaces, integrated transport, and so on), will we start doing so in the much wider context of economy-wide changes brought about by mass unemployment? I don’t think so.

Perhaps more concerning in the longer term is that we don’t understand how AI will develop, what it will end up being capable of, and therefore what it will end up doing. It’s easy to be impressed by the accelerating advances in AI and learning systems, but the old saw that computers can only do what you programme them to do (originally dreamed up by IBM in the 1950s to reassure managers that their jobs were safe) is already untrue, not least because of IBM’s own Watson. We already live in an algorithm-driven world, whether that’s Amazon suggesting things you might want to buy, variable-phasing traffic lights, or high-frequency stock trading systems. And we’ve already experienced what happens when they go wrong, such as the ‘flash crash’ that briefly wiped around $1tn off stock prices on 6 May 2010.

Now that systems are learning for themselves and adjusting their own activity and ‘thought’ patterns, how do we know where they’ll end up? Some systems are already capable of designing their own next generation. Computers don’t ‘think’ as we do, so why would they develop along the same paths as us? As they ‘evolve’ by themselves, they will create structures and processes that we never designed and don’t understand; we may not even be intellectually equipped to understand them.

I’m not talking about Skynet and the Terminator here, but about the myriad small advances and changes that will creep into the world and its interconnected systems, accelerating towards a not-too-distant future in which we won’t actually know how it all works. We really ought to be more concerned about sleep-walking into that than we seem to be.