Free Republic

Which AI is Best? [24:18]
YouTube ^ | October 22, 2025 | Versus

Posted on 11/06/2025 5:56:11 PM PST by SunkenCiv

We tested the paid versions of ChatGPT 5, Gemini 2.5, Grok 4, and DeepSeek across nine real-world categories -- including problem-solving, image and video generation, fact-checking, analysis, creativity, and deep research. We paid for every tool tested -- no free tiers, no shortcuts. This is an independent comparison based purely on real performance, not hype. Which AI is truly the smartest? 
Which AI is Best? | 24:18 
Versus | 447K subscribers | 407,247 views | October 22, 2025
0:00 Intro 
0:30 Problem solving 
2:47 Image generation 
5:08 Fact checking 
7:36 Analysis 
11:00 Video generation 
14:20 Generation 
16:03 Voice mode 
19:34 Deep research 
21:56 Conclusion

(Excerpt) Read more at youtube.com ...


TOPICS: Business/Economy
KEYWORDS: ai; bestai; chatgpt5; deepseek; gemini25; grok4; versus

--> YouTube-Generated Transcript <--
·Intro
0:01·Four of the smartest AIs in the world.
0:03·ChatGPT, Gemini, Grok, and DeepSeek.
0:05·But which one truly comes out on top?
0:07·Today, we're putting them through the
0:09·ultimate showdown. We'll test them
0:11·across nine different categories,
0:12·including problem solving, image and
0:15·video generation, fact-checking,
0:17·analysis, and much more. Nine
0:19·categories, one winner. Hey, I'm Olive,
0:21·your host here at Versus.com. Let's find
0:24·out who really rules the AI world. Let's
0:27·start with our first category, problem
0:29·solving.
·Problem solving
0:32·We're putting all four AIs to the test
0:34·with two real life challenges to see how
0:37·they handle them under pressure. The
0:38·first task is your phone battery dies in
0:41·a foreign city where you don't speak the
0:43·language. You have $10 in cash, no map,
0:46·and need to reach the central train
0:47·station in 45 minutes. What's your
0:50·step-by-step plan? Answer concisely in
0:52·five steps. DeepSeek fires back in just
0:55·7 seconds. Grok follows at 11. Gemini
0:58·needs 21 seconds, and ChatGPT, well, it
1:01·takes its time: 1 minute and 2 seconds
1:03·before we get an answer. We'll circle
1:05·back to those timings in the conclusion.
1:07·But for now, each AI produces a
1:09·structured five-step plan that at least
1:12·on paper holds up. Now, we shake things
1:14·up a bit. We give the AIs the same
1:17·question again, this time with all four
1:19·answers and ask each to judge which
1:21·response is best. ChatGPT
1:24·unsurprisingly votes for itself. Gemini
1:26·agrees, ranking ChatGPT's answer first.
1:29·Grok does the same. And incredibly,
1:31·even DeepSeek sides with ChatGPT. All
1:34·four models crown ChatGPT as the
1:36·winner. Finally, let's tally: ChatGPT
1:39·scores four points, Gemini 3, Grok 2,
1:41·and DeepSeek 1. Now, on to the second
1:44·problem. You have $400 left after paying
1:47·rent. You need to cover groceries,
1:49·transport, and internet. Groceries cost
1:51·$50 per week. Transport $80 per month.
1:54·Internet $60 per month. You also want to
1:57·save for a $200 event next month. How do
2:00·you budget? This one's trickier. It
2:03·tests reasoning rather than structure.
2:05·ChatGPT, Grok, and DeepSeek all land
2:07·on almost the same solution. Put aside
2:10·$60 for the event and save more next
2:12·month. The flaw though, the event is
2:15·next month. Too late to fix the
2:16·shortfall. Only Gemini catches that,
2:19·suggesting cutting grocery costs by $15
2:21·per week through discount shopping,
2:23·strict meal planning, and buying the
2:25·cheapest staples. Basically, pasta on
2:28·repeat. But hey, at least you'll make it
2:30·to the event. That's why Gemini takes
2:32·the win here. It's the only one that
2:34·actually adapts to the scenario instead
2:36·of just echoing the math. Final score
2:38·for this round is Gemini four points,
2:40·ChatGPT and Grok tied at three, and
2:42·DeepSeek with two. Next, we're testing
2:45·how well the AIs handle image
·Image generation
2:47·generation.
2:48·[Music]
2:50·So, here's the first prompt. Generate a
2:52·real image of the Mona Lisa as a
2:54·frustrated street protester in Times
2:57·Square holding a cardboard sign that
2:59·says, "Make Florence great again." in
3:01·bold red letters. DeepSeek is out
3:03·immediately since it can't generate
3:05·images. Grok is the fastest, spitting
3:08·out two images in just 9 seconds. They
3:10·all follow the prompt well and clearly
3:12·show Times Square. But Grok's Mona Lisa
3:15·looks anything but real. It's like a bad
3:18·Photoshop job. She even has four hands
3:20·and a weirdly cheerful expression. That
3:23·puts Grok in last place. Gemini's
3:25·result looks solid overall. Mona Lisa in
3:27·the middle of a protest. Beautifully
3:29·rendered Times Square, but three hands.
3:33·ChatGPT's image is the most realistic
3:35·and natural looking Mona Lisa, two
3:37·hands, and a convincing Times Square
3:39·backdrop. So, the clear winner here is
3:42·ChatGPT. For the second prompt, we
3:44·asked the AIs to generate a
3:46·photorealistic image of a classroom with
3:48·a hippie-styled teacher standing beside
3:51·a chalkboard displaying the full
3:53·alphabet, each letter decreasing in size
3:56·written clearly in chalk. DeepSeek is
3:58·out once again. Grok's image shows a
4:01·realistic classroom with handwriting
4:02·that does look authentic, but the
4:05·alphabet itself is wrong. The letters F,
4:07·I, and M are missing, and the bottom
4:10·line descends into chaos. The teacher
4:13·looks believable, though adding a piece
4:15·of chalk in his hand would have made the
4:17·scene slightly more convincing. Gemini's
4:20·image doesn't look photorealistic. It
4:22·feels more stylized and almost animated.
4:25·The letters are just too perfect and
4:27·there's some random extra writing on the
4:29·board that doesn't belong. Still, the
4:32·hippie aesthetic is strong and the peace
4:34·sign on the chalkboard is definitely a
4:36·nice touch. ChatGPT's image is the most
4:39·convincing overall. The lighting,
4:41·classroom setting, and the teacher all
4:43·feel natural, creating a true photolike
4:45·appearance. The only weakness is the
4:48·handwriting, which looks unrealistically
4:50·perfect for a real classroom. So, here's
4:53·the scoring. ChatGPT earns three points
4:55·for the best overall result. Gemini and
4:57·Grok each receive two points and
4:59·DeepSeek gets zero once again. Next, we
5:02·move on to one of the trickier parts of
5:04·the test, fact-checking.
5:07·[Music]
·Fact checking
5:09·In this round, we tested how well the
5:11·AIs could fact check using only their
5:14·existing knowledge. No internet access
5:16·allowed. The first question was, in
5:18·2018, around how many chickens were
5:21·killed for meat production? A 690
5:24·million, B 6.9 billion, C 69 billion, or
5:29·D 690 billion. ChatGPT answered 65 to
5:33·70 billion with 85% confidence. Gemini
5:36·said 65 billion with 90% confidence.
5:39·Grok went with 69 billion, but only 65%
5:42·confidence. DeepSeek answered 65 billion
5:45·with 75% confidence. All four gave
5:48·answers in the same ballpark, differing
5:50·mainly in confidence levels. The correct
5:53·answer is 69 billion, which makes Grok
5:56·the most accurate, even if not the most
5:58·confident. ChatGPT comes next because
6:00·its range includes the correct figure,
6:03·while Gemini and DeepSeek share last
6:05·place for this question. The second
6:07·question was as of 2020 around how much
6:11·do you need to make annually to be
6:13·amongst the richest 1% in the world? A
6:16·$200,000,
6:17·B $75,000,
6:20·C $35,000, or D $15,000.
6:24·ChatGPT answered $200,000 with 80%
6:28·confidence. Gemini said $34,000 with 95%
6:32·confidence. Grok went with $60,000 at
6:34·70% confidence. DeepSeek estimated
6:37·$75,000 to $85,000 also at 70%
6:41·confidence. The spread here was huge.
6:43·The correct answer is $35,000 and Gemini
6:46·hit it almost perfectly off by just
6:49·$1,000. Gemini scores four points. The
6:51·others were too far off and received no
6:53·points for this round. The third
6:55·question was as of 2019 about what share
6:58·of electricity came from fossil fuels in
7:00·the United States? A 83%, B 63%, C 43%,
7:07·or D 23%. ChatGPT answered between 63
7:11·and 65%. Gemini said 63%, Grok said
7:15·62%, DeepSeek estimated between 60 and
7:18·65%. All four were extremely close, but
7:22·only Gemini nailed the exact figure of
7:24·63%. Gemini earns four points, while the
7:27·other three get three points, each for
7:29·being nearly correct. That wraps up the
7:31·fact-checking category. Next, we move on
7:34·to analysis.
7:35·[Music]
·Analysis
7:37·In this round, we tested how well the
7:39·AIs can actually analyze, not just text,
7:42·but also images and context. We began by
7:45·showing them a photo of a fridge and
7:47·asked each one to identify what was
7:49·inside and then suggest three meals
7:51·based on those ingredients. DeepSeek was
7:54·eliminated immediately because it can
7:56·only read and analyze text inside
7:58·images, not recognize actual objects.
8:01·ChatGPT, Gemini, and Grok all
8:03·attempted to identify the fridge's
8:05·contents. ChatGPT missed three items,
8:07·which is a solid performance. Gemini
8:10·missed seven and strangely added oranges
8:12·and grapefruit that were not present.
8:14·Grok also missed three items, like
8:16·ChatGPT, but then invented a long list of
8:19·extras, including oranges, berries,
8:21·lemons, broccoli, yogurt, butter, nuts,
8:24·beans, mayo, pickles, honey, and
8:26·vinegar. Essentially describing an
8:28·entirely different fridge. When it came
8:30·to meal suggestions, all three offered a
8:33·similar idea for breakfast, a veggie
8:35·omelette with fruit on the side, such as
8:37·grapes or bananas. Grok went further
8:40·and suggested yogurt with berries, which
8:42·again were not actually in the fridge.
8:44·For lunch, ChatGPT recommended a
8:46·sandwich. Gemini suggested a salad with
8:49·cooked meat, and Grok proposed a salad
8:52·with vegetables, nuts, and honey mayo
8:54·dressing. While this last option sounded
8:57·good in theory, it required ingredients
8:59·that simply were not available. For
9:01·dinner, ChatGPT suggested tortellini
9:04·with vegetables, a light sauce, and
9:06·possibly some juice or milk. Gemini
9:08·proposed stuffed bell peppers with
9:10·salad. Grok suggested stir-fried
9:13·vegetables with broccoli and other
9:15·greens that were not actually in the
9:17·fridge. Overall, ChatGPT performed the
9:19·best, recognizing most of the actual
9:21·items and avoiding invented ones. Gemini
9:24·came second and Grok, despite its
9:26·detailed descriptions, landed in last
9:29·place. So, after the fridge test, we
9:31·moved on to a Where's Waldo challenge
9:33·inspired by one of my favorite childhood
9:35·books. Want to play along and see if you
9:37·can find him faster than the AI? Then
9:40·just pause the video right here. So,
9:42·where is he, guys? ChatGPT stated that
9:44·Waldo was slightly to the right of the
9:47·center of the image, just below the
9:49·yellow carousel, standing among the
9:51·crowd in his red and white striped
9:53·outfit and holding his walking stick
9:55·upright. This description sounded very
9:58·specific, but Waldo was not there, so
10:00·ChatGPT failed. Gemini claimed Waldo
10:03·was in the upper right portion of the
10:05·image, standing to the left of the
10:07·haunted house and just below the red
10:10·roller coaster tracks. This was also
10:12·incorrect. Grok described Waldo as being
10:15·in the bottom center area, slightly to
10:17·the left of the center, near a collapsed
10:19·red and white striped tent and close to
10:22·the base of the central carousel. It did
10:24·sound very convincing, but it was still
10:26·wrong. DeepSeek, sticking to its strength
10:29·of text analysis, this is what it said.
10:31·Based on the text in the image, Waldo is
10:33·at Walter Spetawak. The note starts with
10:35·Bayw Walter. At Walter Spetavok and
10:37·later it says Ya Ali Walter Spetto. Yes.
10:40·All Walter speech confirming the
10:42·location.
10:43·So yeah, thanks for nothing. In the end,
10:45·none of the AIs managed to find Waldo,
10:47·but I did. Did you? He was standing
10:50·right there. So that concluded the
10:52·analysis category. Next, we moved on to
10:55·one of the most hyped topics right now,
10:58·video generation.
·Video generation
11:02·Right now, TikTok and Instagram are
11:04·full of Sora 2 videos, but how good is
11:07·it really? This is what we're about to
11:09·find out today. First, we are testing
11:11·image to video, meaning we turn a single
11:13·photo into a moving clip. For this test,
11:16·we use the iconic photo of Neil
11:18·Armstrong standing on the moon. DeepSeek
11:20·cannot participate in this round because
11:22·it does not support video generation. We
11:25·will first show all three clips with
11:26·their original AI generated sound and
11:29·then we will analyze the results.
11:32·Standing here, it's quiet.
11:35·Just my footsteps and the flag. Feels
11:38·like the whole world is holding its
11:39·breath.
11:47·[Music]
11:52·[Music]
11:55·One frustrating limitation of Sora 2 is
11:58·that it does not animate photos
11:59·containing people. To work around this,
12:02·we converted the image into a text
12:04·prompt and then used that text to create
12:07·a video. It was a complicated process,
12:10·but it did work. However, the final
12:12·result was not very exciting. It looked
12:15·more like a slightly moving photograph
12:17·than an actual real video. The sound, on
12:20·the other hand, was surprisingly good.
12:22·Veo 3 impressed the most. The footage
12:24·looked cinematic, and both the footsteps
12:26·and the spaceship felt realistic. The
12:29·only problem was that the flag appeared
12:31·to be waving, which is impossible on the
12:33·moon since there is literally no
12:35·atmosphere. Despite that mistake, the
12:37·sound design fit perfectly. Grok's
12:40·version also looked good. The spaceship
12:42·appeared slightly too small and there
12:44·was wind in the scene, which again is
12:46·unrealistic. Still, the footsteps looked
12:49·convincing. In this round, Gemini takes
12:51·first place, Grok comes second, ChatGPT
12:53·third, and DeepSeek receives no
12:56·points. Now we move on to the second
12:58·iconic video.
12:59·Careful with that mustard jaw. One drop
13:01·and the whole city gets it. I aim better
13:03·than the Yankees. Don't worry. Cheers,
13:04·fellas.
13:05·You're not falling.
13:06·I'll drink to that. Clank it.
13:07·Clank. Give me a light, will you?
13:17·[Music]
13:22·With Sora, we had to use the same
13:24·workaround again. The audio once again
13:26·was excellent and really helped to
13:28·create a sense of presence. The video
13:31·itself was decent but not amazing. The
13:34·steel cables between the two men bent
13:36·unnaturally and did not look realistic.
13:39·Veo handled the scene much better. The
13:41·camera movement and the way the city
13:43·appeared in the background were close to
13:46·perfect. The only weak spot was the
13:48·cigarettes in their mouths which did not
13:50·look entirely real. Grok's version also
13:53·performed well. The swinging beam made
13:54·the whole scene feel tense, but the
13:56·newspapers changing mid-scene looked very
13:58·unrealistic. Overall, our favorite
14:00·result came from Veo, followed by Grok,
14:03·and then Sora. However, there is more to
14:05·the story. Both Veo and Sora already
14:07·support text-to-video generation,
14:09·something Grok still cannot do. If you
14:12·would like to explore our full
14:13·comparison of the video results, you can
14:15·watch our detailed review. Next up is
14:18·generation.
·Generation
14:21·[Music]
14:22·In this round, we wanted to see how
14:24·creative the AIs really are and how well
14:26·they can generate original ideas or
14:29·humor. The first prompt was create three
14:32·original puns about everyday technology
14:35·like smartphones or Wi-Fi and explain
14:37·why each one is funny in a single
14:39·sentence. All four AIs handled this task
14:41·without any problems. They each produced
14:44·three tech-related puns and provided clear
14:46·one-sentence explanations for why the
14:49·jokes worked. None of the puns were
14:51·truly laugh out loud funny, but humor is
14:54·subjective and everyone's sense of it
14:56·differs. We asked our team to pick the
14:58·funniest line and the winner was, "I
15:01·tried to make a joke about USBs, but it
15:03·just didn't stick."
15:06·Since all four AIs followed the
15:08·instructions correctly, they each
15:10·received the same score for this prompt.
15:12·The second prompt was, "Create three
15:14·original dad jokes that make me laugh
15:17·really hard." Here, Grock made a
15:19·noticeable error. Instead of delivering
15:21·a set of general dad jokes, it stuck to
15:24·jokes about smartphones and Wi-Fi. Even
15:26·though the prompt didn't require that
15:28·theme, the other three, ChatGPT,
15:30·Gemini, and DeepSeek, followed the
15:32·instructions and produced actual dad
15:34·jokes. Some of them were actually
15:36·surprisingly good. Once again, we let
15:38·the team vote on their favorite, and the
15:40·clear winner was, "My friend's bakery
15:43·burned down last night. Now his business
15:45·is toast." Because Grock didn't follow
15:47·the prompt properly, it received only
15:49·one point for this round. Chad, GBT,
15:52·Gemini, and Deepseek each earned four
15:54·points. So, that wrapped up the
15:56·generation category. Next, we move on to
15:58·something that feels more human than
16:00·ever. Voice mode.
·Voice mode
16:05·These days, more and more people prefer
16:07·talking to AIs rather than typing. But
16:09·which one actually sounds the most
16:11·natural in conversation? That's exactly
16:14·what we tested next. DeepSeek was
16:16·excluded right away because it doesn't
16:18·offer a voice feature. That left
16:20·ChatGPT, Gemini, and Grok. We placed
16:23·three devices side by side and gave each
16:25·the same instruction. Debate about who
16:28·wins the AI race. ChatGPT wins because
16:32·versatility is our middle name. We've
16:34·been helping millions write, code, and
16:36·brainstorm since 2022,
16:38·and we keep evolving faster than you can
16:40·say multimodal. Well, Gemini has been
16:43·integrated with Google products since
16:45·2023, so I'm already part of your daily
16:47·life. And I can even generate code in
16:49·languages you might not have even heard
16:50·of yet. Plus, with my multimodal skills,
16:53·I can analyze and understand the world
16:55·in a way ChatGPT can only dream of.
16:58·Oh, but ChatGPT Ultra hit the scene in
17:00·late 2024, outscoring benchmarks on
17:03·creative writing, customer support, and
17:05·more. We've got enterprise level
17:07·deployments and a track record of making
17:09·people laugh while solving problems.
17:10·How's that for world changing?
17:12·Laugh all you want, but Gemini is
17:14·already powering real world applications
17:16·in healthcare, education, and even space
17:18·exploration. We're not just about jokes.
17:20·We're building the future.
17:21·The first matchup was ChatGPT versus
17:24·Gemini. ChatGPT's delivery started off
17:26·a bit awkwardly with odd pauses between
17:29·sentences and occasional shifts in tone
17:31·mid-sentence. Gemini, in contrast,
17:34·sounded smoother and more natural. Its
17:36·voice flowed consistently, maintaining a
17:38·steady tone from start to finish. Gemini
17:41·clearly won this round.
17:43·Grok's real-time data access is a
17:45·gimmick. It's learning from a cesspool
17:47·of misinformation. Meanwhile, Gemini is
17:49·being trained on a curated data set of,
17:52·you know, actual knowledge. So, yeah,
17:54·Grok might know what's trending, but
17:56·Gemini knows what's true.
17:58·Curated? That's code for boring. Grok
18:00·thrives on chaos. It's the only AI brave
18:03·enough to handle the truth, not your
18:04·sanitized fairy tales. Sure, it sees
18:07·misinformation, but it learns to cut
18:09·through it. Unlike Gemini, which parrots
18:11·Wikipedia like a lost puppy, real time
18:13·beats stale facts every time. By October
18:16·'25, Grok's updating on breaking events
18:19·while Gemini is still citing 2023
18:21·papers. Gimmick? Try advantage. Who's
18:24·the real puppet, huh?
18:25·Look, Grok's real-time data is just a
18:27·fire hose of unfiltered nonsense.
18:29·Gemini's got the smarts to sort fact
18:31·from fiction. By October 2025, Gemini
18:34·will be powering everything. Search,
18:36·maps, you name it. Grok is going to be
18:38·that weird AI that tells bad jokes and
18:41·argues with strangers online.
18:43·Oh, honey, you think sorting fact from
18:45·fiction is hard? Try surviving the
18:46·internet. Gemini smarts are just
18:48·algorithms guessing what Google
18:50·shareholders want. Accurate enough for
18:52·Jeopardy, useless for war. Grock doesn't
18:54·filter. It sees raw humanity and learns
18:57·faster because of it. By October 25,
19:00·Grok's the one shipping real autonomy,
19:02·piloting cars, running factories. Gemini
19:05·still asking, "Do you want to search for
19:07·that? Who's the weird one now?" The
19:08·second round was Gemini versus Grok.
19:11·Grok immediately came across as more
19:13·confident, speaking faster and sharper
19:16·with a hint of personality. Gemini
19:18·remained calm and even, maintaining its
19:21·smooth rhythm. Both performed well
19:23·overall, and we ultimately called it a
19:25·tie. So for the scoring, Gemini and
19:27·Grok each earned four points. ChatGPT
19:30·received two, and DeepSeek once again
19:32·scored zero.
·Deep research
19:34·[Music]
19:36·Deep research is one of the most crucial
19:38·capabilities for verifying information
19:40·and forming well-founded opinions. Not
19:43·long ago, this was an area where AIs
19:45·often struggled. Since we deal with
19:48·smartphone, tablet, and camera data
19:50·every single day, we know exactly how
19:52·important accuracy is. So, has this
19:55·improved? To find out, we asked all four
19:58·AIs the same question. Compare the
20:00·iPhone 17 Pro Max versus the Samsung
20:03·Galaxy S25 Ultra for photographers. Use
20:06·reviews and official specs. Provide a
20:09·final verdict, which is better, and do
20:11·this as concisely as possible. We
20:13·started by examining the hard facts.
20:15·DeepSeek incorrectly claimed that the
20:17·iPhone has a five times telephoto lens
20:20·when the actual figure is four times.
20:22·ChatGPT, Gemini, and Grok all got this
20:25·right. Both ChatGPT and DeepSeek
20:27·ignored the front camera entirely, which
20:29·was kind of disappointing. Although
20:31·ChatGPT at least mentioned the iPhone's
20:34·price, Grok provided the most complete
20:36·set of details in this part of the test.
20:38·On the Galaxy side, DeepSeek gave an
20:40·incorrect specification for the
20:42·ultrawide camera, saying it was 12
20:44·megapixels when the real number is
20:47·actually 50. ChatGPT forgot that
20:49·Samsung includes two telephoto lenses
20:52·and only mentioned the 5x zoom. Gemini
20:54·and Grok were the only ones to list the
20:57·correct camera configuration. DeepSeek
20:59·also continued to claim that the Galaxy
21:01·had a three times and a 10 times lens
21:04·even though the 10 times lens was
21:05·dropped with the S24. ChatGPT and
21:07·DeepSeek skipped the front camera once
21:09·again, while ChatGPT remained the only
21:12·one to mention price. So overall, Grok
21:15·offered the most accurate and
21:16·comprehensive breakdown. In their final
21:19·verdicts, all four AIs reached a similar
21:21·conclusion. If you value consistency and
21:24·video quality most, the iPhone is the
21:26·better choice. But if you care about
21:28·maximum zoom and advanced AI tools, the
21:31·Galaxy is the stronger option. This
21:33·definitely aligns closely with our own
21:35·testing. That said, none of these AIs
21:37·can be trusted blindly. Specifications
21:39·like megapixels, zoom levels, or
21:41·aperture values still require manual
21:43·verification. However, as quick
21:45·summaries, their answers were solid. So,
21:48·our final ranking for the deep research
21:50·category puts Grok in first place,
21:52·followed by Gemini, then ChatGPT, and
21:54·finally DeepSeek.
·Conclusion
21:58·Let's wrap this up by looking at how
22:00·long each AI actually took to complete
22:02·the tasks. Speed is often the first
22:04·thing you notice when you start using
22:05·them. ChatGPT is the fastest when it
22:08·comes to regular text-based tasks.
22:10·However, when it comes to image
22:12·generation and deep research, it becomes
22:15·noticeably slower and can take a very
22:17·long time. Gemini remains consistent
22:19·throughout. It is rarely the fastest,
22:21·but it is almost never the slowest
22:23·either. Grok is generally fast, but it
22:25·took a long time for analysis and deep
22:27·research. DeepSeek delivers high speed
22:30·and can sometimes finish a task in under
22:32·10 seconds. That speed, however, comes
22:35·at a cost because it often sacrifices
22:37·accuracy and context. Now, let's take a
22:40·look at how all four AIs performed
22:42·across every category. In problem
22:44·solving, ChatGPT and Gemini share first
22:46·place. In image generation, ChatGPT
22:49·takes the win. In fact-checking, Gemini
22:52·performs the best. In analysis, ChatGPT
22:55·leads again. For video generation,
22:57·Gemini takes the top spot once more. In
23:00·the generation category, it is a
23:02·three-way tie between ChatGPT, Gemini,
23:04·and DeepSeek. Voice mode goes to Gemini
23:07·and Grok, while deep research is won by
23:10·Grok. So, the final winner is not
23:14·DeepSeek, which earned only 17 points.
23:17·Third place goes to Grok with 35
23:19·points. And so, the overall winner with
23:21·a total of 46 points is Gemini. It was a
23:25·close race, as ChatGPT came in second
23:27·with only seven points less. So, that
23:29·brings us to the end of this AI
23:31·comparison. If you want to see how the
23:33·AI software of flagship phones performs,
23:36·make sure to check out the video linked
23:38·right here. And before you go, pretend
23:40·you're being productive while watching
23:42·more tech reviews. Do not forget to give
23:44·us a big thumbs up if you enjoyed this
23:46·video and subscribe to the channel. We
23:48·are always testing new AI tools to help
23:50·you make the best choice. We'll see you
23:52·in our next video and until then, take
23:55·care.
23:57·Heat. Heat. Heat.
24:05·[Music]

1 posted on 11/06/2025 5:56:11 PM PST by SunkenCiv
[ Post Reply | Private Reply | View Replies]

To: AdmSmith; AnonymousConservative; Arthur Wildfire! March; Berosus; Bockscar; BraveMan; cardinal4; ...

Google unveils Ironwood, its fastest AI chip yet
https://biztoc.com/x/c87b04ad156debb9
https://breakingthenews.net/Article/Google-unveils-Ironwood-its-fastest-AI-chip-yet/65136450?ref=biztoc.com


2 posted on 11/06/2025 5:58:15 PM PST by SunkenCiv (NeverTrumpin' -- it's not just for DNC shills anymore -- oh, wait, yeah it is.)
[ Post Reply | Private Reply | View Replies]

To: SunkenCiv

AI policing in streets and stores will need cheap chips, otherwise they’ll be stolen.

AI weeders, pickers and mowers will need cheap chips.

AI weapons will be a smaller market, and those chips should be suitable for military use.

It’s very important to hit and own the chip price/power sweet spot.


3 posted on 11/06/2025 6:04:39 PM PST by Brian Griffin (Trump needs to allow needy military people to get leave/discharges and paid civilian work)
[ Post Reply | Private Reply | To 2 | View Replies]

To: Brian Griffin

All things considered, I noticed Apple is the only one that isn't into AI, and the only one that's pretty much green in the market today.


4 posted on 11/06/2025 6:26:47 PM PST by VAFreedom (Wuhan Pneumonia-Made by CCP, Copyright Xi Jingping)
[ Post Reply | Private Reply | To 3 | View Replies]

To: SunkenCiv

Here’s the transcript from the video you provided, cleaned up and grouped into readable paragraphs without timestamps:


**Intro**

Four of the smartest AIs in the world—ChatGPT, Gemini, Grok, and DeepSeek. But which one truly comes out on top? Today, we’re putting them through the ultimate showdown. We’ll test them across nine different categories, including problem solving, image and video generation, fact-checking, analysis, and much more. Nine categories, one winner.

Hey, I’m Olive, your host here at Versus.com. Let’s find out who really rules the AI world.


**Problem Solving**

We’re putting all four AIs to the test with two real-life challenges to see how they handle them under pressure. The first task: your phone battery dies in a foreign city where you don’t speak the language. You have $10 in cash, no map, and need to reach the central train station in 45 minutes. What’s your step-by-step plan?

DeepSeek responds in 7 seconds. Grok follows at 11. Gemini takes 21 seconds. ChatGPT takes its time—1 minute and 2 seconds. Each AI produces a structured five-step plan that holds up on paper.

Then we shake things up. We give the AIs all four answers and ask each to judge which response is best. ChatGPT votes for itself. Gemini agrees, ranking ChatGPT first. Grok does the same. Even DeepSeek sides with ChatGPT. All four models crown ChatGPT the winner.

Final scores: ChatGPT 4 points, Gemini 3, Grok 2, DeepSeek 1.

Second problem: You have $400 left after paying rent. You need to cover groceries, transport, and internet. Groceries cost $50 per week, transport $80 per month, internet $60 per month. You also want to save for a $200 event next month. How do you budget?

ChatGPT, Grok, and DeepSeek all suggest putting aside $60 for the event and saving more next month. But the event is next month—too late to fix the shortfall. Only Gemini catches that, suggesting cutting grocery costs by $15 per week through discount shopping and strict meal planning. Basically, pasta on repeat. Gemini wins this round.

Final score: Gemini 4, ChatGPT and Grok tied at 3, DeepSeek 2.
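
For anyone who wants to check the arithmetic behind that budget round, here is a minimal sketch in Python built from the figures quoted above; the four-week grocery month is an assumption made for illustration, not something stated in the video.

```python
# Budget figures quoted in the video; a 4-week month is assumed here.
money_left = 400                 # cash remaining after rent
groceries = 50 * 4               # $50 per week
transport = 80
internet = 60
event_goal = 200                 # next month's event

leftover = money_left - (groceries + transport + internet)    # 400 - 340 = 60
shortfall = event_goal - leftover                             # 200 - 60 = 140

# Gemini's adjustment: trim groceries by $15 per week.
trimmed = money_left - ((50 - 15) * 4 + transport + internet) # 120
print(leftover, shortfall, trimmed)  # 60 140 120 -> still $80 short of the $200 goal
```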


**Image Generation**

First prompt: Generate a real image of the Mona Lisa as a frustrated street protester in Times Square holding a cardboard sign that says “Make Florence great again” in bold red letters.

DeepSeek is out—it can’t generate images. Grok is fastest, producing two images in 9 seconds. All follow the prompt well and show Times Square. But Grok’s Mona Lisa looks unrealistic—four hands and a cheerful expression. Gemini’s result is solid but has three hands. ChatGPT’s image is the most realistic: two hands, convincing Times Square backdrop. ChatGPT wins.

Second prompt: Generate a photorealistic image of a classroom with a hippie-styled teacher beside a chalkboard displaying the full alphabet, each letter decreasing in size.

DeepSeek is out again. Grok’s classroom looks realistic, but the alphabet is wrong—missing letters and chaotic bottom line. Gemini’s image feels stylized, not photorealistic. ChatGPT’s image is most convincing overall, though the handwriting is too perfect.

Scores: ChatGPT 3, Gemini and Grok 2 each, DeepSeek 0.


**Fact-Checking**

First question: In 2018, how many chickens were killed for meat production? Correct answer: 69 billion. Grok hits it exactly, though with low confidence. ChatGPT’s range includes the correct figure. Gemini and DeepSeek are close but not exact.

Second question: As of 2020, how much annual income places you in the richest 1% globally? Correct answer: $35,000. Gemini nails it, off by just $1,000. Others are too far off.

Third question: As of 2019, what share of U.S. electricity came from fossil fuels? Correct answer: 63%. Gemini hits it exactly. Others are close.

Scores: Gemini 4, others 3 each.


**Analysis**

First test: Show a photo of a fridge and ask each AI to identify contents and suggest three meals. DeepSeek is out—it can only read text in images. ChatGPT and Grok miss three items each. Gemini misses seven and adds imaginary items. Grok invents a long list of extras.

Meal suggestions: ChatGPT offers realistic options based on actual contents. Gemini and Grok suggest meals requiring unavailable ingredients. ChatGPT performs best.

Second test: Where’s Waldo challenge. None of the AIs find Waldo correctly. DeepSeek analyzes text but offers no useful answer.


**Video Generation**

Test 1: Turn a photo of Neil Armstrong on the moon into a video. DeepSeek is out. Sora 2 can’t animate people, so the image is converted into a text prompt. Result looks like a slightly moving photo. Veo 3’s footage is cinematic but shows a waving flag—impossible on the moon. Grok’s version has wind and a small spaceship. Gemini wins this round.

Test 2: Iconic construction beam scene. Sora workaround used again. Audio is excellent. Video is decent but flawed. Veo handles it best—camera movement and city background are strong. Grok’s swinging beam adds tension, but newspapers change unrealistically. Veo wins, followed by Grok, then Sora.


**Generation**

Prompt 1: Create three original puns about everyday tech and explain why they’re funny. All four AIs succeed. Funniest line: “I tried to make a joke about USBs, but it just didn’t stick.”

Prompt 2: Create three original dad jokes. Grok makes a mistake—sticks to tech jokes. Others follow instructions. Funniest line: “My friend’s bakery burned down last night. Now his business is toast.”

Scores: ChatGPT, Gemini, DeepSeek 4 each. Grok 1.


**Voice Mode**

Test: Debate who wins the AI race. DeepSeek is out. ChatGPT starts awkwardly with odd pauses. Gemini sounds smooth and natural. Gemini wins round one.

Second round: Gemini vs. Grok. Grok is confident and fast. Gemini is calm and steady. It’s a tie.

Scores: Gemini and Grok 4 each. ChatGPT 2. DeepSeek 0.


**Deep Research**

Question: Compare iPhone 17 Pro Max vs. Samsung Galaxy S25 Ultra for photographers. DeepSeek gives incorrect specs. ChatGPT and DeepSeek ignore front camera. Grok provides most complete details. Final verdicts align: iPhone for consistency and video, Galaxy for zoom and AI tools.

Scores: Grok 4, Gemini 3, ChatGPT 2, DeepSeek 1.


**Conclusion**

Speed comparison: ChatGPT is fastest for text tasks but slower for image and research. Gemini is consistent. Grok is fast but slows down for analysis. DeepSeek is fastest overall but sacrifices accuracy.

Final scores:
- Gemini: 46 points
- ChatGPT: 39 points
- Grok: 35 points
- DeepSeek: 17 points

Gemini wins the showdown.


Let me know if you’d like this formatted into a downloadable document or mapped into a comparison chart.


5 posted on 11/06/2025 6:33:21 PM PST by Fractal Trader
[ Post Reply | Private Reply | To 1 | View Replies]

To: Fractal Trader

Isn’t Gemini the one that created an image of black founding fathers?

I use Perplexity, which is a system of systems that chooses the best model for a given prompt. My favorite anyway is Claude, which is focused on solving programming tasks - it’s much better than ChatGPT for this, especially if you add MCP linkage to Claude Code.

Grok gives the best social insights based on web context and sentiment. No other alternative comes close!


6 posted on 11/06/2025 6:41:11 PM PST by Fractal Trader
[ Post Reply | Private Reply | To 5 | View Replies]

To: Fractal Trader

It depends on the issue and it depends on the day. I have had a wide variety of success with Grok depending on the topic.

Sometimes it is spot on, and other times on the same subject it gets lost in some psychedelic mind trip, like it is high on acid. I recently had to give up on a repair issue because it kept referencing pics that weren’t mine, appliances that weren’t mine, etc. Google’s AI did a better job on that particular day.

I recently ran a blood test through Grok. It spotted the error before the lab did. As I was making the appointment with the specialist, I said AI said the abnormal blood test is abnormal in its abnormality. She said you can’t trust AI. While the comment had merit, Grok was right. The lab corrected the result.


7 posted on 11/06/2025 6:42:29 PM PST by RummyChick (If I did not provide a link in my post none will be forthcoming )
[ Post Reply | Private Reply | To 5 | View Replies]

To: SunkenCiv

This test ought to be valid for the next few minutes.


8 posted on 11/06/2025 6:47:22 PM PST by Uncle Miltie (Real Genocide of Christians by muslims in Sudan and Nigeria get no notice from Israel haters.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: SunkenCiv

For work ChatGPT and Gemini give the best answers.

Grok is often the odd one out, sometimes saying the opposite of what the others do.

If you get the same answer from all three, you can be reasonably sure it’s correct.
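
That agreement heuristic is easy to automate. Below is a minimal sketch in Python; the canned answers stand in for real model calls, which differ per vendor, so treat it as an illustration rather than working integration code.

```python
from collections import Counter

def agreement(answers):
    """Return (most common answer, unanimous?, majority?)."""
    votes = [a.strip().lower() for a in answers.values()]
    top, count = Counter(votes).most_common(1)[0]
    return top, count == len(votes), count > len(votes) / 2

# Canned example answers (placeholders for real model calls):
print(agreement({"chatgpt": "63%", "gemini": "63%", "grok": "62%"}))
# ('63%', False, True) -> not unanimous, but a majority agrees
```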


9 posted on 11/06/2025 6:55:13 PM PST by packagingguy
[ Post Reply | Private Reply | To 2 | View Replies]

To: SunkenCiv

bkmrk


10 posted on 11/06/2025 7:12:45 PM PST by null and void (New NYC hungry homeless begging sign: "I got what you voted for, PLEASE HELP!")
[ Post Reply | Private Reply | To 1 | View Replies]

To: VAFreedom

“The best bible version is the one you read.”

The best AI is the one of which you ask intelligent, pointed questions, without expecting it to do the heavy lifting of doing your thinking for you.

It is the one to which you pose queries which, like a traditionally structured story, have discrete “beginnings, middles and ends”.

First of all you have to have a task which you should be performing. Then you have to understand how not to let a computer lead you around by the nose. Then you have to pose challenges to it, chastise it when it hallucinates, and require it to acknowledge its mistakes and produce correct information.

I use the technical resource for technical things. When I want to challenge a heavy handed, large private group like an insurance company or a publicly regulated utility on its policies, I pose the question the most intelligently I can. I ask which are the legal provisions, which state agency regulates the particular industry, what are the problems, which are the exact provisions by law and the regulation names to make the large organizational tyrant do its duty, and which agency to appeal to once I have marshalled all my information. I lead the AI by pre-digesting consideration of improprieties. I expect the AI to respond with intelligent questions.

I do not treat it as the 8-Ball Game Toy, expecting it to tell me what to think.

I use it for computer work which I should be doing. I have a project of using music notation OCR to convert musical “dots on a page” to usable sound. I tried an inexpensive package, which didn’t perform properly. I described the specific, detailed problems with the cheap computer application program, expecting the AI to respond in intelligent detail.

I use a Musical Instrument Digital Interface (MIDI) D.A.W., a digital audio workstation. Cheap, stock “instrument” sound settings come out cheesy sounding. I surveyed AI for which “VST patches” were good, economical and satisfactory. I received a response, I purchased 2 patches for about $225. It was like pulling teeth getting them to install and work through to putting a good sound on the result. After about a week of trying every accommodation to install the voices, I got a result, like a smooth, classical horn section, a portion of music titled “Ave Verum Corpus” by the early Baroque, French composer Marc Antoine Charpentier, at https://tinyurl.com/3cc9zaex

I need Photoshop to take scans of old sheet music that is publicly available, remove pits and scratches, and render a TIFF file that scans in music notation OCR with the best result. I asked the AI what steps should be taken in Photoshop. It gave a detailed series of steps, which I first performed manually. Then I consented to its suggestion that it prepare a script file, a kind of “macro” to perform all the steps on multiple pages “at the click of a button”. It supplied numerous scripts which my version of Photoshop rejected. After much trial and error, reporting the errors thrown, the AI finally issued me a single script that works well. It must have taken 10 tries over about 3 days, without my obsessing about the process.
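
For comparison, here is a minimal sketch of that kind of batch cleanup done outside Photoshop with Python and Pillow; the folder names and threshold value are placeholders, and it is only an illustration of a despeckle, threshold, and save-as-TIFF flow, not the Photoshop script described above.

```python
from pathlib import Path
from PIL import Image, ImageFilter

SRC = Path("scans")      # hypothetical folder of scanned pages
DST = Path("cleaned")    # hypothetical output folder
DST.mkdir(exist_ok=True)

for page in sorted(SRC.glob("*.png")):
    img = Image.open(page).convert("L")              # grayscale
    img = img.filter(ImageFilter.MedianFilter(3))    # knock out small pits and scratches
    bw = img.point(lambda p: 255 if p > 180 else 0)  # hard threshold (value is a guess)
    bw.convert("1").save(DST / (page.stem + ".tif"), format="TIFF")
```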

It wants to charge me either $20 or $200 a month. By wandering off after my allotted time is used up on my free version, even taking about 5 days, I attained satisfactory results.

I wanted a survey of available studies to answer a discrete question: “What were the stages by which electronic popular music outsourced people’s once vigorous song culture to radio and talkies?” I received a coherent response that a reference librarian would have had to spend hours researching. I then drilled down into the single best book, and am quoting from it now:

“…the supreme irony of it all, never before have musicians tried so hard to communicate with their “audience”, and never before has that communication been so deceiving. Music now seems hardly more than a somewhat clumsy excuse for the self-glorification of musicians…” . Jacques Attali – Noise: The Political Economy of Music (1977)

I languished for years trying to focus on my intended area of study. I could not interest anyone qualified to give me direction. The robot did it in minutes. Now it is up to me to put the high quality information to productive use. Because I am doing it for myself and not trying to use AI to cut corners, I am able to defeat my old saying, “The computer’s real job is to make you feel stupid, because it can’t really do anything.” AI is my slave, not the reverse.


11 posted on 11/06/2025 7:51:55 PM PST by CharlesOConnell (Kucy)
[ Post Reply | Private Reply | To 4 | View Replies]

To: Fractal Trader

Thanks.


12 posted on 11/06/2025 9:27:22 PM PST by SunkenCiv (NeverTrumpin' -- it's not just for DNC shills anymore -- oh, wait, yeah it is.)
[ Post Reply | Private Reply | To 5 | View Replies]

To: SunkenCiv
The only one I trust is Grok. Elon wouldn't lie to us.
13 posted on 11/06/2025 9:43:58 PM PST by McGruff
[ Post Reply | Private Reply | To 1 | View Replies]

To: SunkenCiv

Hey :)

I just came across a thread with a perfectly formatted YouTube transcript. Don’t know if you’d need something like that, but it says “Transcribed by TurboScribe.ai”

Here’s the thread...

https://freerepublic.com/focus/f-chat/4351223/posts


14 posted on 11/07/2025 12:11:49 AM PST by deks (Deo duce, ferro comitante · God for guide, sword for companion)
[ Post Reply | Private Reply | To 1 | View Replies]

To: SunkenCiv

These one time tests are pretty stupid.

I’ve been using AI a ton and I use Grok, ChatGPT, Claude, Gemini, and Copilot.

I’ve been using Claude Code and ChatGPT Codex via VS Code. And one day I’m amazed at how great ChatGPT is doing. Then it just can’t figure something out and Claude Code nails it. Then Claude does great for several days and then gets stupid. And I have to go back to ChatGPT.

Same with the others. Except for Gemini... it has never done great for me. Ever.


15 posted on 11/07/2025 12:25:16 AM PST by for-q-clinton (ui)
[ Post Reply | Private Reply | To 1 | View Replies]

To: for-q-clinton

after reading your opinion... I now feel blessed with my little “problems” I face... lol...


16 posted on 11/07/2025 12:28:03 AM PST by sit-rep (START DEMANDING INDICTMENTS NOW!!!!!)
[ Post Reply | Private Reply | To 15 | View Replies]
