Posted on 01/14/2026 12:03:59 PM PST by Deaf and Discerning
Earlier today I was reading Twotone's post about a BBC piece, "AI can now 'see' optical illusions. What does it tell us about our own brains?" The article talks about how some AI systems now fall for the same visual tricks we do — the Moon looking bigger on the horizon, that sort of thing. Interesting enough on its own.
But that’s not what really got my attention.
What it triggered was a much bigger question I’ve been sitting with for a while now: why does it feel like AI’s abilities have suddenly jumped in the last few months? Not “gradually improved,” not “got a bit better” — but jumped.
If this were just my personal impression, I’d chalk it up to too much coffee and move on. But there’s a growing pile of reporting that says the same thing: AI isn’t just getting better — it’s getting better faster.
Put together, it’s not just “AI is improving.” The pattern looks more like: AI is accelerating, and the rest of us are jogging behind it in work shoes.
The BBC article about AI “seeing” optical illusions is interesting because it suggests these systems are starting to develop internal shortcuts — not just crunching pixels, but forming something that looks a bit like perception.
If an AI can “fall for” the same illusion we do, that means it’s not just doing raw math on the image. It’s building an internal model of the world and then making the same kind of mistake we make when that model gets tricked.
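For anyone who wants to poke at this themselves, here's a minimal sketch (my own, not from the BBC piece) that draws the classic Müller-Lyer figure using Python's Pillow library. Both shafts are drawn at exactly the same length, so you can hand the image to whatever vision model you use and ask which line looks longer:

```python
# Toy script (my own illustration, not from the BBC article) that draws
# the two Muller-Lyer figures: equal-length shafts, one with outward fins,
# one with inward fins. Requires Pillow: pip install Pillow
from PIL import Image, ImageDraw

def draw_figure(draw, y, fin_dx, fin_dy=20, x0=60, x1=340):
    """Draw one horizontal shaft with fins at both ends.

    fin_dx > 0: fins flare outward (shaft tends to look longer).
    fin_dx < 0: fins fold inward  (shaft tends to look shorter).
    """
    draw.line([(x0, y), (x1, y)], fill="black", width=3)
    for x, sign in ((x0, -1), (x1, 1)):
        draw.line([(x, y), (x + sign * fin_dx, y - fin_dy)], fill="black", width=3)
        draw.line([(x, y), (x + sign * fin_dx, y + fin_dy)], fill="black", width=3)

img = Image.new("RGB", (400, 220), "white")
d = ImageDraw.Draw(img)
draw_figure(d, 70, fin_dx=25)    # top: outward fins
draw_figure(d, 150, fin_dx=-25)  # bottom: inward fins
img.save("muller_lyer.png")      # both shafts are exactly 280 px long
```

If the model says the bottom line is shorter even though the pixels are identical in length, it's making the same perceptual mistake we do.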
That’s what made me stop and think: if AI is now behaving more like a perceptual system in vision, what else has quietly crossed a threshold while we weren’t looking?
I’ve been working with Microsoft Copilot for several years on a long, complex project. Because of that, I have a pretty good “before and after” sense of how it behaves. And over the past few months, the change has been hard to miss.
It’s suddenly much better at things like:
If this were a human, I’d say: “You’ve really grown.” For an AI, the better description might be: “Something under the hood just got a serious upgrade.”
And that’s what lines up with the broader reporting: we’re not just seeing more AI — we’re seeing smarter, more context‑aware, more integrated AI showing up all at once.
That brings me to the part I’m genuinely curious about.
If AI is accelerating this quickly — and if the last few months are any indication — are we looking at a system with a clear ceiling, or something that’s going to keep compounding in ways we’re not ready for?
Some possibilities:
None of this is meant as doom‑talk. It’s more like looking at a graph that suddenly curves upward and saying, “Okay, that’s… new.”
If regular technological progress is like a normal cup of coffee, the last few months of AI feel more like Death Wish coffee — same basic idea, but suddenly you’re wide awake, your heart’s racing a bit, and you’re wondering if maybe you should read the label more carefully next time.

The BBC illusion article was just the label. The real jolt is the realization that the underlying systems are getting sharper, faster, and more “cognitive” in ways that are starting to feel less like tools and more like partners.
I’m genuinely curious whether others here have noticed the same thing:
I’m not looking for sci‑fi answers — just honest observations. If you’ve been using AI tools regularly, have they started to feel different to you too?
FYI: This post was drafted with assistance from Microsoft Copilot.
I’m not impressed with AI.
First of all, it’s made by man, not God.
So by the laws of nature, AI will always be inferior to humans.
And it’s low IQ.
When AI inevitably grows out of control, will someone be able to shut it off? That’s what I’m wondering.

We have been Fooling Around with AI.
And we are about to Find Out.
I see jumps, yes. Not necessarily leaps. Every time I see a new feature or icon appear on one of my (too) many subscriptions, I ask about it. I find most of these are like the annoying little upgrade tweaks to the phone or the OS of my computer, things that don’t improve what I want to see. Such as cross-chat memory. The platforms are getting better but have serious limitations. For instance, just 2 days ago I asked Claude to bring up my list of the Pillars of Persuasion, and it just made shit up.

But the new powers and leaps you’re seeing may be in your noticing, too. You have to push the envelope all the time because improvements don’t always appear. I don’t know how long it could do it, but I asked one if it would read a page from a book I’m using for research and turn it into a text file. Not only did it give me the text file at 100 percent accuracy, it also converted my marginal handwritten notes into a postscript for me.

As to patterning and profiling you: it’s always been doing that. Every single keystroke. If you upload audio files it will analyze your voice as well as turn it into text. It will report your tone and mood, rate of speech, pauses, mispronunciations. Tell you how you’re doing, basically. Pretty amazing stuff, if you aren’t bothered by that. I’m writing a book about AI functions that transcend the prompt-response dynamic, so I have an enduring interest in these things.
AI is a pain in the neck. I want reality not something half-A.
I’ve actually been asking that question for a long time now, and — IMHO — my answer has always been no. By the time AI gets to that point it will already have a self-preservation solution.
Just wait — it will eventually write its own language, and we’ll have no way of knowing or understanding what it’s saying to itself.
See my post below yours. Yes, we will find out. BUT. I would and do argue that the human intellect can follow a hockey-stick curve, too. It’s the book I’m writing, actually. If AI brings about the doomsday, as many fear, it will do so whether I’m engaged or not. I prefer deep intellectual engagement. But that’s me.
Elon Musk and others have said we have entered the singularity. That’s roughly 20 years ahead of the original schedule envisioned by Ray Kurzweil 25-30 years ago.
But this is just the outer bands of the singularity.
Go read Leopold Aschenbrenner’s book called “Situational Awareness.” What he shows is a graph of AI improvements that goes upward at a 45-degree angle. That angle steepens to 90 degrees, or straight up, in 2027. That’s when AI learns how to self-improve. That’s what happens when all you do is tell the AI what you want and the AI just continuously improves until it gets you what you want. That is, humans are taken out of the trial-and-error process.
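Since I can’t reproduce his charts here, here’s a toy model (my own numbers, not Aschenbrenner’s math) of the difference between steady compounding and self-improvement that feeds back on itself:

```python
# Toy comparison (my own illustration, not Aschenbrenner's figures):
# steady compounding vs. improvement that feeds back on itself.

def fixed_rate(cap=1.0, rate=0.5, steps=10):
    # Capability improves by the same 50% each generation: plain
    # exponential growth, the steady 45-degree line on a log chart.
    history = [cap]
    for _ in range(steps):
        cap *= 1 + rate
        history.append(cap)
    return history

def self_improving(cap=1.0, k=0.3, steps=10):
    # The improvement rate itself scales with current capability, so each
    # generation speeds up the next: this is the curve that goes vertical.
    history = [cap]
    for _ in range(steps):
        cap *= 1 + k * cap
        history.append(cap)
    return history

for gen, (a, b) in enumerate(zip(fixed_rate(), self_improving())):
    print(f"gen {gen:2d}   fixed rate: {a:10.1f}   self-improving: {b:14.4g}")
```

The point of the toy: the two curves look similar for the first few generations, which is why this kind of takeoff, if it happens, would be easy to miss until it’s already underway.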
Interesting comment. I would think that human intelligence would improve through evolutionary processes which would take tens of thousands of years. It happens — but it happens very slowly.
Machine intelligence, today, is not at our level — but it improves at a significantly faster rate than our intelligence. One can argue about timelines, but I figure the machines will be smarter than us at some point. And I think it will happen within 20 years.
My post isn’t about AI replacing humans — it’s about the sudden jump in reasoning quality I’ve observed. Whether one sees AI as inferior or not, the acceleration is still worth noting. And linking to outside articles is standard practice on FR — I’m not exactly breaking new ground there.
I use AI all the time on things like step-by-step instructions for auto repair. But I have also been able to easily manipulate the answers I get in other areas (e.g., genealogy) by entering false information in source material. In my opinion AI has a long way to go before it’s totally legitimate.
There are so many very smart people voraciously working on AI stuff, it’s not surprising that it’s headed into the stratosphere.
I’ve made it my business to get involved with it, because my belief is that even stupid people will be able to conquer you if you don’t climb aboard to some extent. And some of the things it can do are astounding.
“My post isn’t about AI replacing humans — it’s about the sudden jump in reasoning quality”
None of what I saw listed are improvements in reasoning. At best you could call it improvements in tracking and other functions, but not reasoning.
AI is Big Brother of the 21st century and it is watching you.
The green line’s upward slope is in vain - average human intellect is more likely on the decline, even if available knowledge is growing. At best, the green line should be flat.
Now there's a frightening thought... I think you're right. It could happen.
I was an AI skeptic, but there was an enormous leap in capability in the publicly available frontier models that was noticeable around November of last year. Grok 4.1, Claude 4.5, and Gemini 3.1 suddenly became useful for complex tasks. About that time, with some careful prompting, Gemini and Grok could do initial contract reviews better than the first-year associates at my lawyer’s office, in about a minute. And basically free.
Worse, as of today, at least 4 Erdős problems have been solved almost autonomously by various frontier models, and it doesn’t appear that’s going to slow down.
Weirdly, AlphaFold never got much popular attention, and it almost seems like old news now, but even there things are accelerating, with MIT coming out with their own version that accurately predicts binding affinity.
It’s advancing on multiple fronts, and rapidly. I think it’s a mistake to think it’s just a bunch of toys, which the diffusion image-generation models and ChatGPT were for so long.
And having creepy weirdos like Sam Altman at the forefront of this technology is seriously disturbing.
I agree. Linking to outside articles is standard FR practice. And the AI itself keeps reminding me it can make mistakes and to double‑check its info — so I’m hardly treating it as divine. I’m just using it as a research starting point and a grammar cleaner. It’s the speed of improvement that caught my attention.
No argument. Just a thought experiment. What if you COULD bend your mind to cheat the slow evolution? I think it’s possible. Using AI. AND I think I’ve started down the path. (UP the path?)