It was perhaps the biggest bombshell in digital history: a moment when life began to seem like a scene from an apocalyptic sci-fi film.
On 22 March more than 1,800 signatories – including Elon Musk and Apple co-founder Steve Wozniak – called for a six-month pause on the development of the latest AI supersystems.
Leading scientists in the field and engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.
The letter said: “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Scary stuff indeed.
The main concern is what is likely to follow the current GPT-4 generation of AI.
The writers point to the likelihood of "giant" models and insist governments should step in to halt development for months, if not indefinitely.
It was a sudden change of gear in the way the whole sector was publicly perceived. In the months leading up to the Musk letter, the prevailing public take on this area of AI seemed to be that it was a novel and amusing new game.
People were playing about with ChatGPT: "write a song in the style of David Bowie about Charlton Athletic winning the FA Cup" and so on. They would try to pass off the ChatGPT version of their own writing and see if anyone noticed; very often they didn't.
There were news stories about people overturning parking fines with letters written by the bot.
A few journalists and others who make their living by writing were nervous about their work being devalued. But otherwise the public were out of step with the experts on this – and were blindsided by the strength of the warning when it came.
It’s difficult to predict what will happen next. Governments have traditionally been slow, if not static, in responding to tech issues, and I can’t see anything changing that. Even if there were a coordinated international response to this letter, it seems likely that those involved would be moving at a much slower speed than the systems they are putatively looking to pause.
If the future is unclear, we do need to come to terms quickly – very quickly – with the present. Because most of us aren’t there yet.
We now live in a world where a picture of the Pope apparently wearing a comical-looking puffer jacket can be shared millions of times in a matter of hours before most of those doing the sharing even pause to consider that it may be a deepfake.
More pertinent is this scenario: we could, say, receive a message that looks and reads like it was written by a good friend – but wasn’t.
Until now most of the scams out there have been generic and relatively unsophisticated – essentially hundreds of different variations on the old “my uncle owned a diamond mine” classic.
In this brave new AI world, instead of an obviously fake diamond-mine owning uncle, the fraudsters could potentially contact you about your uncle who owns a dry cleaning business – if your uncle does own one. And so on.
We have now reached the point where AI means that, with just a modicum of research, the fraudsters can create bespoke, personalised messages that are much more convincing and so much more likely to cut through.
The answer is to trust the number that delivered the message: the single most dependable metric, by some distance, for assessing the veracity of any content is and remains the data trail and status of the mobile device used to deliver it.
Many times more content is sent by mobile than by all other media combined – and the vast majority of users have a long and detailed history linked to a single number.
So, by using live telecom data, you get much clearer insight into who is behind a message than you can get just by reading its tone.
We aren’t yet at a point where ordinary users typically have access to these invaluable data resources, but commercial enterprises do – and can access them for peanuts in a fraction of a second.
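The kind of check described above can be sketched in a few lines. This is a minimal illustration, not any provider's real API: the record fields (number tenure, recent SIM swap, line status) and the six-month threshold are assumptions chosen to show the idea, though commercial telecom-data services expose broadly similar signals.

```python
from dataclasses import dataclass

@dataclass
class NumberRecord:
    """Hypothetical telecom-data record for a sending number."""
    months_active: int          # how long the number has belonged to one subscriber
    sim_swapped_recently: bool  # recent SIM swap is a classic fraud signal
    line_active: bool           # whether the number is currently in service

def risk_flag(record: NumberRecord) -> str:
    """Return 'red' or 'green' based purely on the number's data trail,
    ignoring the text of the message entirely."""
    if not record.line_active or record.sim_swapped_recently:
        return "red"
    if record.months_active < 6:   # illustrative threshold, not an industry standard
        return "red"
    return "green"

# A long-held, untouched number passes; a freshly swapped SIM does not.
print(risk_flag(NumberRecord(48, False, True)))  # green
print(risk_flag(NumberRecord(48, True, True)))   # red
```

The point of the sketch is that the decision never looks at the message content – however convincingly an AI writes, it cannot retrofit a four-year history onto a number bought last week.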
There’s simply no excuse for leaving your workforce exposed to potential AI-driven blags and scams when they could be given a red light warning based not on the text of any message but the number that sent it.
For now this is the single most valuable way you have to protect yourself against rogue AI use.
The future will certainly bring new hazards, as the Musk letter suggests, but we will have to wait to see what they are.
Take a look at our latest white paper, Tackling Mobile Identity Fraud in Financial Services. Our product experts are always on hand to answer any questions!