A little over a week ago, ChatGPT-5 was announced to mixed reviews, some breathless, some scathing. At the time, I was at the early stages of planning an article about a subject somewhat further afield than my usual hunting grounds. Just before I started writing, curious about ChatGPT-5, I asked it to write the piece, using prompts such as 'rigorous, well-sourced and referenced research', 'trends and statistics', 'contested issues', 'major players', 'up-to-date anecdotes and incidents', 'light humour and satire' and 'pull quotes from experts'.
I switched to Word before it had finished thinking and wrote my article for the next couple of hours. I then switched back to the ChatGPT tab to read its attempt.
There's no other way to say this – the ChatGPT-5 article was better than mine across every metric you care to measure: better structured, better written, more perceptive, more deeply sourced, more surprising, fuller of 'voice' and more interesting to read. I could go on but it's too depressing. (Note: when I reported a similar incident some months ago using a previous version of ChatGPT, the AI attempt was good, but not better than mine. Not this time.)
I quickly closed ChatGPT-5 (in a horrified, slamming-shut-Pandora's-box sort of way) and sent my own article to the publication. I couldn't, in good conscience, send the AI article, even if it was better, even if I was the person who prompted it.
I am struggling to understand why.
Before I get back to this, another story. A colleague of mine recently wrote a non-fiction book. I spoke to him on the eve of publication. Well-known international figures had provided blurbs and a foreword. Although my friend is greatly respected, innovative, knowledgeable and accomplished, and with an interesting story to tell, he is not a writer. I asked him if he had used a ghostwriter. 'No, I just used AI,' was his response. It took him a couple of weeks to write the book.
The interesting thing is that he saw absolutely no reason not to use AI and felt neither guilt nor shame about it. He did not duck and dive the question and had no small measure of pride in the book he prompted into being. It was a rational decision to use AI; it sped up the production process, was cheaper than a ghostwriter and was (he explained to my raised eyebrows) the obvious thing to do. He asked – why would I write it if AI could do it for me? It was a fair question to which I had no answer.
The book is selling well internationally, informing and giving pleasure to readers. I am unable to mount any reasonable argument against the route he chose to get here. I am also happy about the success of his book because the story of what my friend has accomplished in his life is worth telling and he is a terrific person.
I have been a writer for a long time. Seven books, a few plays, hundreds of articles and columns, blogs, posts – I estimate about four million words in the last fifteen years. I find writing intensely pleasurable and satisfying. There are many things I do not know how to do, far more than I am comfortable about. But I know how to do this.
Now AI knows how to do this too, at least as well as I do. It is all very, er, dispiriting.
There is an academic benchmark called the Torrance Tests of Creative Thinking (TTCT), developed by psychologist Ellis Paul Torrance in 1966, which has since been updated numerous times. TTCT evaluates creative potential across several dimensions like 'originality', 'flexibility' and 'fluency'. Unsurprisingly, researchers have applied the test to the output of AI over the last few years. Just as unsurprisingly (at least for some), AI is reaching human levels in many domains of creativity across multiple human endeavours and occupations.
A musician friend of mine who spends time thinking about these things responds to AI-created music with an entirely rational reaction – yeah, but what's the point? A good question, but the point is clearly money. Creative output sells. If the economic machine can acquire creative goods at lower cost, it will do so. It makes no difference whether it is a song or an architectural design.
The question of whether AI will ever be a Van Gogh or Mozart is not relevant – those people at the beatific edge of genius are a rarity. More important are all those who labour in traditionally 'creative' fields and beyond (such as those who bring creative thinking to bear on problems of engineering, law or accountancy). All of these have assumed a shield of invincibility against the machine. This shield now appears to be brittle, fragile in a way no one could have imagined. They will soon be defenseless against the inexorable march of AI across all creative thought.
AI is going to make more stirring music, more beautiful buildings, more elegant machines, more godly mathematics, more life-saving drugs, more inspiring poetry. It will not matter that they are not of human born – they will be packaged, human 'creators' will have their names attached to the product, and they will be sold to consumers who neither know nor care about their provenance.
Which takes me back to my original dilemma. If AI can write an article better than I can (which now seems to be the case), why should I not direct it to do so? I can't seem to find a reason not to, even though I have never done it.
It is my ongoing quandary – my relevance, as well as my good humour, are at stake.
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, "It's Mine: How the Crypto Industry is Redefining Ownership", is published by Maverick451 in SA and Legend Times Group in UK/EU, available now. Copy edit by Bryony Mortimer.