
Wishing on a Q star…and a well-regulated future


There’s a story of two musicians in the brass section of the Berlin Philharmonic: both sitting at the back, and three-quarters of the way through Mahler’s Ninth, one says to the other, “Who’s conducting tonight?” … “No idea,” says his colleague (picking up a trumpet and breathing in), “I haven’t been looking”. And that must be pretty much how it felt last week to be working at OpenAI: Sam Altman, conductor-in-chief and all-round AI poster boy, was briefly ousted as CEO in a coup orchestrated by the company’s Chief Scientist (the villainous-sounding Ilya Sutskever, or at least he sounds villainous to me… straight from the pages of a James Bond novel). But anyway, I digress… the fact is, for a brief(ish) period, OpenAI had lost its talismanic leader, so just like an orchestra without a conductor, it couldn’t get anything done… right? Wrong… even without Sam’s cheery disposition in the boardroom, the company’s boffins still reportedly managed to complete a spectacular project with profound potential to change how we’ll all collectively engage with our future.

That project is known as Q* (although, because “Q Asterisk” obviously isn’t snappy enough, we’re being encouraged to call it “Q Star”). So what’s so special about it? Well, to answer that question, we first need to travel back in time by exactly a year… because that, almost to the day, was when ChatGPT was unleashed on an unsuspecting world.

As everybody knows by now, ChatGPT can do a lot of things quite well: things that were previously done by someone with a pencil or a laptop. It could, for example, have a stab at writing this Newswire (not, obviously, to the same high standard, but with a reasonable pass at literacy), and if I were ever foolish enough to give way to my children’s semi-regular pleas, it could also churn out a passable version of their homework. But what it can’t do with any proficiency, no matter how much data you load it with, is complete a school math exercise (or not reliably, anyway). Ironically, this particular digital super-brain is incapable of doing what you and I could do in a heartbeat with a pocket calculator: the neural networks that ChatGPT relies on are built to predict plausible text, not to calculate, which leaves them surprisingly unreliable with numbers.

Q* is reportedly set to change all that, which is why it’s already being heralded as a technological breakthrough: a milestone, in short, towards the eventual creation of an AI system capable of outsmarting a whole room full of human beings with pencils and laptops. It could, for example, open up the possibility of computer code being produced without human intervention, because the Q* system can reason things out for itself. Just like those trumpet players can get by without a conductor… and, indeed, OpenAI tech boffins without a CEO telling them what to do.

We’re not there yet

Of course, we’re not there yet: solving an elementary math problem is a vastly different proposition from tackling the sort of complex mathematics that remains the preserve of real human beings using real pencils.

That said, all the brouhaha surrounding ChatGPT and Q* has undoubtedly raised alert levels among key regulators across the globe, including the European Union, which is in the process of finalising its wide-ranging AI Act: legislation intended to ensure that AI systems used in the bloc are safe, transparent, traceable, non-discriminatory and environmentally friendly (www.europarl.europa.eu). As its own press release puts it (pithily): “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes”. Hurrah to that: the EU, for one, is obviously not prepared to wait patiently for the day when any given missile launch system decides to cut out the middleman and starts talking directly to GPS to identify its target.

Neither, happily, are the United States, the United Kingdom, and more than a dozen other countries: they too aren’t prepared to wait for the worst, because they have all just signed a new (non-binding) international agreement on keeping AI systems secure by design and out of the hands of so-called “rogue actors”. And, in a separate (slightly more mundane) development, the White House also issued an Executive Order in October to manage AI’s risks, including its potentially harmful impact on jobs (www.whitehouse.gov/briefing-room).

It all sounds as though it's heading in the right direction: joined up, resilient, and sufficiently prudent, which means the rest of us can concentrate on enjoying the undoubted future benefits of AI (including Q*) …but if my children are reading this, no: you have to do your homework by yourself.

Executive Overview

Q* burst onto the world stage in the same week OpenAI seemed to be tearing itself apart. It’s a good example of the inexorable progress of technology, but we still need to keep a weather eye on future regulation.



Red Ribbon Asset Management (www.redribbon.co) aims to harness the full potential of fast-evolving and emerging technologies to meet the needs of global communities as part of a circular economy, fully recognising the compelling demands of planet, people and profit.

Suchit Punnose
