Like Donald Trump on a slow news day, Artificial Intelligence (AI) is always with us: we're each witnessing a steady tide of "smart" devices creeping into our homes, everything from intelligent thermostats to AI security cameras and televisions and, yes, the ubiquitous Alexa, as well as the Google search engine you may have used to find this article and (for the pioneering among us) ChatGPT. Today, you can unlock your iPhone with facial recognition technology and then use the phone to switch the oven on while you're commuting home on the train. The creepy spectres of HAL from 2001 and Terminator robots running out of control couldn't feel further from this cosy reality… railing against homely AI innovation is a lot like badmouthing motherhood and apple pie.
And anyway, AI is with us already, so isn't it better to learn to live with it and embrace the future? Behind the scenes, and away from domestic kitchens, global AI innovation is developing at a breathtaking pace, so the odds are it will become more pervasive, not less, and we'll all have to learn to live with its escalating significance.
Well, all that's certainly true, but there are also inherent dangers in examining a Trojan horse's teeth too closely while ignoring what's inside its belly. Just think of a few examples from more distant history: Ernest Rutherford split the atom and changed the way we looked at the world in 1917, and less than thirty years later, we had the atomic bomb (an insidious legacy we've been living with ever since). Cast steel gave us more railways, but it also gave us mass-produced handguns, and while television undoubtedly brightened up more than a few lives, it also gave us Jeremy Kyle and Simon Cowell.
You get the message… virtually every major social innovation has a darker heart, and it pays to be cautious in how we go about dealing with it.
This leads me to my question: is that true for AI too, and if so, what does its darker heart look like? You'll remember that earlier this year a hundred or so "leading experts" (including, oddly, Elon Musk) urged a pause in research programmes to allow more considered reflection on an AI future (and just how scary it might be)… so I'm certainly not alone in asking the question.
Some of these experts have been talking with increasing stridency about what they call "Frontier AI": technologies that could do us serious harm, and that we might not be able to pull the plug on. They hypothesise a "God-like" AI and, with all the low-key chutzpah of a child assembling a car bomb, counsel against "…the pursuit of innovation imposing excessive negative externalities on society" (I'm not making that up… check out the website at www.cnas.org). Machines that can replicate themselves and then go on a killing spree? That's definitely what I would call a "negative externality".
So, what exactly do these dark-side AI machines look like? The good folk at CNAS (the Center for a New American Security) point helpfully to a few examples: for a start, a new generation of self-guiding, self-launching biochemical weapons and, on a (slightly) more homely level, AI image and text generation systems capable of churning out mass disinformation campaigns and throwing future elections into chaos (more shades of Donald Trump there). It's all a world away from intelligent thermostats and AI-enabled toasters, but the scale of the issue obviously makes it worth thinking about carefully… and working out how we can rise to meet the challenges AI's darker heart might pose.
The obvious answer is better, more joined-up global regulation: after all, the average hot dog seller outside Wembley Stadium is subject to far more regulation than any team of AI boffins putting together guidance systems for a surface-to-air missile (I'm not making that up either). Better regulation doesn't necessarily mean stifling innovation, but it does mean putting some guidelines in place for the future. That's a message governments across the globe would be well advised to take seriously, and the time to take the necessary action is now.
Let's begin to deal constructively with AI's darker heart, if only so we can rest assured of a brighter future and look forward (as we should) to the positive benefits AI as a whole can bring to all our lives.
Like most major innovations and emerging technologies, AI has some potential for harm hidden within its vast potential for good. We would be foolish to ignore that complexity, and rash to set our face against better and more insightful regulation as a platform for future development.
Invest in Red Ribbon Asset Management
Red Ribbon Asset Management (www.redribbon.co) aims to harness the full potential of fast-evolving and emerging technologies to meet the needs of global communities as part of a circular economy, fully recognising the compelling demands of planet, people and profit.