I asked ChatGPT to format a word using different fonts. It failed. Eight straight times. I tried many phrasings, assuming the problem was my prompt. I wasn't designing a warp drive or hacking the DoD. Just a task that could easily be done by a 14-year-old.
Yet each time ChatGPT triumphantly returned a “corrected” version, the result looked like typography after two martinis. Letters wandered aimlessly. Fonts shape-shifted as if they were trying on outfits before a night out.
And every time, the same confidence: “Here you go. Perfect.”
Reader, it was not perfect.
By attempt number six, an unsettling realization set in. Humanity has created technology capable of composing classical music, diagnosing diseases, even passing the bar exam — yet apparently formatting a single word remains aspirational.
That these systems make mistakes isn't surprising — it's the cockiness: explaining before understanding, with no self-awareness to stop. Large language models do not execute tasks the way humans do. They generate what a correct answer should look like. The illusion is impressive, until precision matters — and in business, it matters most.
AI lives comfortably in a pleasant Twilight Zone suburb where the lawns are tidy and nothing quite works.
At one point, after another misfire, the system politely suggested I hire a human designer — even pointing me toward Fiverr. Imagine a calculator recommending a slide rule because it can’t do math.
And yet the technology is undeniably remarkable. It compresses hours into minutes. It accelerates thinking.
This is where the narrative outruns reality. This is not autonomy, this is assistance — powerful, impressive, dazzling assistance — but assistance nonetheless. We are not living in the age of artificial general intelligence. We are living in the age of artificial almost. Almost correct. Almost reliable. Almost ready.
The danger isn’t that machines will take over. It’s that humans, seduced by fluency, may stop noticing the difference between sounding right and being right. Because fluency is persuasive. Confidence sells. And AI delivers both — even when accuracy is apparently optional.
So yes, let it brainstorm your strategy. Let it summarize the 80-page report everyone pretended to read. Just don’t hand it the controls — particularly for tasks requiring precision, judgment, or the radical notion that two letters in a word can have different fonts.
Someday, machines may run the world. But for now, if civilization hinges on typographic consistency, it might be wise to keep an eighth grader within arm’s reach. Because history may record that humanity created machines capable of simulating intelligence, but still needed a human to check the fonts.
Is this the opposite of The Fixx’s “Saved By Zero”?
ChatGPT
Format the word “helpful” with six different fonts
Here are six different font styles for “helpful” using Unicode text:
𝐡𝐞𝐥𝐩𝐟𝐮𝐥 (bold)
𝘩𝘦𝘭𝘱𝘧𝘶𝘭 (italic)
𝓱𝓮𝓵𝓹𝓯𝓾𝓵 (script)
𝗵𝗲𝗹𝗽𝗳𝘂𝗹 (sans-serif bold)
𝚑𝚎𝚕𝚙𝚏𝚞𝚕 (monospace)
𝖍𝖊𝖑𝖕𝖋𝖚𝖑 (gothic / blackletter)
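A footnote for the curious: those “fonts” aren’t fonts at all. They’re separate characters from Unicode’s Mathematical Alphanumeric Symbols block, so the styling survives copy-and-paste. A minimal sketch of the trick (the function name is mine, not ChatGPT’s), mapping plain lowercase letters onto the Mathematical Bold range:

```python
def to_math_bold(word: str) -> str:
    """Map lowercase ASCII letters to Mathematical Bold (U+1D41A..U+1D433)."""
    # 'a' in that block is U+1D41A, and the letters run contiguously,
    # so a fixed offset from ASCII 'a' does the whole alphabet.
    offset = 0x1D41A - ord("a")
    return "".join(
        chr(ord(c) + offset) if "a" <= c <= "z" else c
        for c in word
    )

print(to_math_bold("helpful"))  # 𝐡𝐞𝐥𝐩𝐟𝐮𝐥
```

Note that not every style is this tidy: the italic range, for example, has a gap where lowercase “h” should be, because that character (ℎ, U+210E) was assigned elsewhere long ago. Which may be part of why a model predicting plausible-looking text keeps stumbling over it.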
But human intelligence is just that, too - almost.
It's like asking a question of some random dude on the street who happens to appear knowledgeable and speaks with confidence and assurance. Nevertheless, he can be right, wrong, or a bit of both.
So, it does mimic human intelligence quite well in that it can create the same errors. It just does it a lot faster.
no link?
AI always says “results can be wrong”.
Asking a left-brained technology to do a right-brained task...
I want to reside in Willoughby.