What to do about AI, and The Great Asymmetry

Tim O’Reilly’s compelling post on regulating AI, ‘To understand the risks posed by AI, follow the money’, is a must-read. O’Reilly is one of my publishing role models, the founder of O’Reilly Media and a key figure in the early conceptualisation of open-source software. Among the hundred articles on AI you’ll see today, his should jump the queue on credibility alone.

A key takeaway is that, in trying to mitigate the potential harms of AI, we should focus on sensible regulation rather than on the technology itself and what it might be capable of. And, more importantly, that such regulation is feasible and has good precedents. We really can influence what people – and people as corporate decision-makers – are able or likely to do with their technology:

So perhaps it is time to turn our regulatory gaze away from attempting to predict the specific risks that might arise as specific technologies develop. After all, even Einstein couldn’t do that.

Instead, we should try to recalibrate the economic incentives underpinning today’s innovations, away from risky uses of AI technology and towards open, accountable, AI algorithms that support and disperse value equitably.

This emphasis on what people do with technology, rather than on the technology itself, reminded me of Stephen Jay Gould’s wonderful essay, ‘The Great Asymmetry’. In describing the great asymmetry, Gould explains:

We can only reach our pinnacles by laborious steps, but destruction can occur in a minute fraction of the building time, and can often be truly catastrophic. […] We perform 10,000 acts of small and unrecorded kindness for each surpassingly rare, but sadly balancing, moment of cruelty.

[Image: A crowd of thousands of people stands beneath an enormous, floating black sphere under a dark, menacing sky. Created with AI.]

Gould argues that we obsess too easily over whether a particular scientific development is helpful or harmful, and that we should focus instead on the great asymmetry itself. Science exacerbates the great asymmetry because it makes it easier for people to destroy: ‘our particular modern tragedy resides in the great asymmetry, and the consequential but unintended power of science to enhance its effect.’

Whether or not we’re working with AI, what matters is which side of the great asymmetry our work serves. I find it grimly motivating to know that the daily slog – let’s not sugarcoat it, it can be a slog – of building things properly, of being kind, and of telling stories that make people better, is the steady work of shoring up the right side of the great asymmetry.