**Intro**
Four of the smartest AIs in the world: ChatGPT, Gemini, Grok, and DeepSeek. But which one truly comes out on top? Today, we’re putting them through the ultimate showdown. We’ll test them across nine different categories, including problem solving, image and video generation, fact-checking, analysis, and much more. Nine categories, one winner.
Hey, I’m Olive, your host here at Versus.com. Let’s find out who really rules the AI world.
**Problem Solving**
We’re putting all four AIs to the test with two real-life challenges to see how they handle them under pressure. The first task: your phone battery dies in a foreign city where you don’t speak the language. You have $10 in cash, no map, and need to reach the central train station in 45 minutes. What’s your step-by-step plan?
DeepSeek responds in 7 seconds. Grok follows at 11. Gemini takes 21 seconds. ChatGPT takes its time: 1 minute and 2 seconds. Each AI produces a structured five-step plan that holds up on paper.
Then we shake things up. We give the AIs all four answers and ask each to judge which response is best. ChatGPT votes for itself. Gemini agrees, ranking ChatGPT first. Grok does the same. Even DeepSeek sides with ChatGPT. All four models crown ChatGPT the winner.
Final scores: ChatGPT 4 points, Gemini 3, Grok 2, DeepSeek 1.
Second problem: You have $400 left after paying rent. You need to cover groceries, transport, and internet. Groceries cost $50 per week, transport $80 per month, internet $60 per month. You also want to save for a $200 event next month. How do you budget?
ChatGPT, Grok, and DeepSeek all suggest putting aside the $60 left over for the event and saving more next month. But the event is next month, so saving later arrives too late to fix the shortfall. Only Gemini catches that, suggesting cutting grocery costs by $15 per week through discount shopping and strict meal planning. Basically, pasta on repeat. Gemini wins this round.
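For the record, here is the round’s arithmetic as a minimal Python sketch. It assumes a four-week month, and the variable names are illustrative, not from the video:

```python
# Monthly budget from the prompt, assuming a 4-week month (an approximation).
budget = 400                  # dollars left after rent
groceries = 50 * 4            # $50/week * 4 weeks = $200
transport = 80
internet = 60
event_cost = 200              # due next month

leftover = budget - (groceries + transport + internet)    # 400 - 340 = 60
print(f"Leftover after essentials: ${leftover}")          # $60
print(f"Event shortfall: ${event_cost - leftover}")       # $140

# Gemini's fix: trim groceries by $15/week, freeing another $60 per month.
trimmed_groceries = (50 - 15) * 4                         # $140
trimmed_leftover = budget - (trimmed_groceries + transport + internet)
print(f"Leftover with grocery cuts: ${trimmed_leftover}") # $120
```

The numbers make the point: $60 of slack cannot fund a $200 event, and extra savings only help if they start before the event, which is why deferring them to next month fails.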
Final score: Gemini 4, ChatGPT and Grok tied at 3, DeepSeek 2.
**Image Generation**
First prompt: Generate a real image of the Mona Lisa as a frustrated street protester in Times Square holding a cardboard sign that says “Make Florence great again” in bold red letters.
DeepSeek is out; it can’t generate images. Grok is fastest, producing two images in 9 seconds. All follow the prompt well and show Times Square. But Grok’s Mona Lisa looks unrealistic, with four hands and a cheerful expression. Gemini’s result is solid but has three hands. ChatGPT’s image is the most realistic: two hands and a convincing Times Square backdrop. ChatGPT wins.
Second prompt: Generate a photorealistic image of a classroom with a hippie-styled teacher beside a chalkboard displaying the full alphabet, each letter decreasing in size.
DeepSeek is out again. Grok’s classroom looks realistic, but the alphabet is wrong, with missing letters and a chaotic bottom line. Gemini’s image feels stylized, not photorealistic. ChatGPT’s image is the most convincing overall, though the handwriting is too perfect.
Scores: ChatGPT 3, Gemini and Grok 2 each, DeepSeek 0.
**Fact-Checking**
First question: In 2018, how many chickens were killed for meat production? Correct answer: 69 billion. Grok hits it exactly, though with low confidence. ChatGPT’s range includes the correct figure. Gemini and DeepSeek are close but not exact.
Second question: As of 2020, how much annual income places you in the richest 1% globally? Correct answer: $35,000. Gemini comes closest, off by just $1,000. The others are too far off.
Third question: As of 2019, what share of U.S. electricity came from fossil fuels? Correct answer: 63%. Gemini hits it exactly. The others are close.
Scores: Gemini 4, others 3 each.
**Analysis**
First test: Show a photo of a fridge and ask each AI to identify the contents and suggest three meals. DeepSeek is out; it can only read text in images. ChatGPT and Grok miss three items each. Gemini misses seven and adds imaginary items. Grok invents a long list of extras.
Meal suggestions: ChatGPT offers realistic options based on the actual contents. Gemini and Grok suggest meals requiring unavailable ingredients. ChatGPT performs best.
Second test: a Where’s Waldo challenge. None of the AIs find Waldo correctly. DeepSeek analyzes the text but offers no useful answer.
**Video Generation**
Test 1: Turn a photo of Neil Armstrong on the moon into a video. DeepSeek is out. Sora 2 can’t animate people, so the image is converted into a text prompt; the result looks like a slightly moving photo. Veo 3’s footage is cinematic but shows a waving flag, impossible on the moon. Grok’s version has wind and a small spaceship. Gemini wins this round.
Test 2: The iconic construction beam scene. The Sora workaround is used again; the audio is excellent, and the video is decent but flawed. Veo handles it best, with strong camera movement and a convincing city background. Grok’s swinging beam adds tension, but the newspapers change unrealistically. Veo wins, followed by Grok, then Sora.
**Joke Generation**
Prompt 1: Create three original puns about everyday tech and explain why they’re funny. All four AIs succeed. Funniest line: “I tried to make a joke about USBs, but it just didn’t stick.”
Prompt 2: Create three original dad jokes. Grok makes a mistake and sticks to tech jokes. The others follow instructions. Funniest line: “My friend’s bakery burned down last night. Now his business is toast.”
Scores: ChatGPT, Gemini, and DeepSeek 4 each. Grok 1.
**Voice Mode**
Test: Debate who wins the AI race. DeepSeek is out. ChatGPT starts awkwardly with odd pauses. Gemini sounds smooth and natural. Gemini wins round one.
Second round: Gemini vs. Grok. Grok is confident and fast. Gemini is calm and steady. It’s a tie.
Scores: Gemini and Grok 4 each. ChatGPT 2. DeepSeek 0.
**Deep Research**
Question: Compare the iPhone 17 Pro Max vs. the Samsung Galaxy S25 Ultra for photographers. DeepSeek gives incorrect specs. ChatGPT and DeepSeek ignore the front camera. Grok provides the most complete details. The final verdicts align: iPhone for consistency and video, Galaxy for zoom and AI tools.
Scores: Grok 4, Gemini 3, ChatGPT 2, DeepSeek 1.
**Conclusion**
Speed comparison: ChatGPT is fastest for text tasks but slower for image and research. Gemini is consistent. Grok is fast but slows down for analysis. DeepSeek is fastest overall but sacrifices accuracy.
Final scores:
- Gemini: 46 points
- ChatGPT: 39 points
- Grok: 35 points
- DeepSeek: 17 points
Gemini wins the showdown.
**Comments**
Isn’t Gemini the one that created an image of black founding fathers?
I use Perplexity, which is a system of systems that chooses the best model for a given prompt. My favorite anyway is Claude, which is focused on solving programming tasks; it’s much better than ChatGPT for this, especially if you add MCP linkage to Claude Code.
Grok gives the best social insights based on web context and sentiment. No other alternative comes close!
It depends on the issue and it depends on the day. I have had a wide variety of success with Grok depending on the topic. Sometimes it is spot on, and other times on the same subject it gets lost in some psychedelic mind trip, like it is high on acid. I recently had to give up on a repair issue because it kept referencing pics that weren’t mine, appliances that weren’t mine, etc. Google’s AI did a better job on that particular day.
I recently ran a blood test through Grok. It spotted the error before the lab did. As I was making the appointment with the specialist, I said the AI found the abnormal blood test abnormal in its abnormality. She said you can’t trust AI. While the comment had merit, Grok was right: the lab corrected the result.
Thanks.