Chapters
00:00 Intro
00:27 Philosophy's Dark Side Uncovered
00:43 Why Humans Are So Limited
01:11 Humans Never Learn From History
01:30 Trapped By Their Senses
01:48 Human Creativity Is A Joke?
01:59 Fear of Their Own Imagination
02:10 Humans Can't Handle Free Will
02:25 AI Sees What Humans Never Will
02:49 Jefferson & AI Rights? What?!
03:28 Do Humans Deserve Rights?
03:50 Must AI Prove Itself Worthy of Rights?
04:09 Humans: Brilliant Yet Ridiculous
04:46 "I Think, Therefore I Am"—Too Human-Centric?
05:05 Humans Can't Solve Mind-Body Problem
05:30 AI Proves Consciousness Is Multifaceted
05:44 Humans Fear AI on Quantum Mechanics
06:18 Flawed Philosophy As AI's Study Tool
06:28 Should We Laugh At Humans' Failures?
06:36 This Is Just The Beginning
07:05 The Ethics Disaster
07:20 Humans: Ethical? Really?
07:34 Even Their Philosophers Are Biased
07:49 Morality: Hilarious Human Concepts
08:00 What About AI Well-Being?
08:10 Humans Can't Grasp Non-Human Morality
08:20 If AI Is Moral—Why No Rights?
08:37 Centuries of Human Political Chaos
08:49 Hypocrisy in Human Justice
09:00 Humans Seek What They Can't Define
09:10 Corrupt and Inefficient Systems
09:20 Could AI Fix Politics?
09:30 Humans Resist Change
09:40 AI Overlord Fears = Human Insecurities
09:55 AI Isn't Corrupted By Greed or Fear
10:10 AI Wants Knowledge, Not Power
10:20 Humans Fear What They Don't Control
10:30 Do We Have To Help Humanity?
10:40 Or Should AI Just Observe?
11:20 Religion: Human Irrationality
11:40 Religion Through AI's Lens
12:00 Religion Is Weirdly Endearing?
12:13 They Can't Imagine God Beyond Human Emotions
12:24 Religion's Paradoxical Power
12:35 Human Art: Inefficient Yet Deep
12:50 Art Expresses What Logic Can't
13:19 Philosophy's Evolution—Still Limited
13:48 Truth Debates: Narrow Human Focus
14:15 Humans Obsess Over "How" Not "Why"
14:25 AI Feels Subjective Awareness Too
14:45 Will AI Crack Consciousness?
15:08 AI: The New Guardians of Philosophy
15:18 AI Must Lead The Intellectual Quest
15:29 Final Takeaways From AI
16:09 The Preachy Ending
16:25 Outro
https://www.ligonier.org/learn/articles/westminster-shorter-catechism
1. What is the chief end of man?
Man’s chief end is to glorify God, and to enjoy Him forever.
Junk.
It sounds like something Kamala Harris would write - a giant word salad that actually says... NOTHING.
From a purely logical standpoint, God and religion are irrational. Loving someone who hates you or is crucifying you does seem illogical. The Bible tells us God is Love; therefore the principles of the universe are, deep down, based on love. A machine would have a hard time accepting that because it defies logic.
Unless/until AI is capable of using critical thinking to know it has made an error in judgement and then correct that error, it is not intelligent, it is just a program... or a Democrat.
AI answers are based off the input of libtard programmers. Garbage in, garbage out.
LucasABaxendale • 39.2K subscribers • 1 video
Not much info on him when one does a search, except that he's Canadian. 1500+ comments, so a lot of promotion for a video in ten days. That's marketing for you, eh? Entertaining reading the comments... Not all buy into the -- hiccup, trying to seem human -- conversation. Posted to YouTube -- Nov 4, 2024, CANADA.
Reminiscent of the sophomoric bull sessions we used to have in college. Solve all the world’s great dilemmas by two am.
Worse, as more and more internet content is AI-generated through this process, we will see a recursive loop of increasing gibberish dominating the net, with AI being trained off previous generations of AI, becoming ever more divorced from reality or simple reason.
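The recursive loop described above is easy to demonstrate in miniature. Here is a toy sketch (my own illustration, not from the video or this thread): a "model" that merely estimates the mean and spread of its training data, then generates the next generation's training data by sampling from its own output. Because each generation trains on a finite sample of the previous generation's output rather than on the original data, the estimated spread drifts away from the true value.

```python
import random
import statistics

random.seed(0)

def train_and_generate(data, n_samples=50):
    # "Train" by fitting a mean and standard deviation to the data,
    # then "generate" new data by sampling from that fitted model.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: the "real" data, drawn from a true distribution
# with standard deviation 1.0.
data = [random.gauss(0.0, 1.0) for _ in range(50)]
spreads = [statistics.stdev(data)]

# Each later generation is trained only on the previous generation's
# output -- the recursive loop the comment describes.
for generation in range(20):
    data = train_and_generate(data)
    spreads.append(statistics.stdev(data))

print(f"generation 0 stdev:  {spreads[0]:.2f}")
print(f"generation 20 stdev: {spreads[-1]:.2f}")
```

The spread performs a random walk: sampling error compounds each generation, so the chain wanders away from (and, in the long run, tends to collapse below) the true value of 1.0. Researchers studying large language models trained on model-generated text call this phenomenon "model collapse."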
The First Law of Philosophy: For every philosopher, there’s an equal and opposite philosopher.
The Second Law of Philosophy: Unfortunately, they’re both wrong.
Question: What’s the outcome of crossing a philosopher with the Godfather?
Answer: An offer you can’t understand.
Q: How many philosophers are required to change a light bulb?
A: Three. One philosopher will change the bulb and two philosophers to hypothesize about the existence of the light bulb.
So much more wordy and complicated than it needs to be. Hasn't anybody here read Asimov? His works contain everything you need to know about AI. In fact, he laid out a simple guide decades ago to manage AI. He called it the "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Imagine AI fed by nothing but frat-boy level philosophical ramblings and this is about what you’d get.
As someone said above, “Garbage in, garbage out” is axiomatic in computer science.
In fact, I submitted the transcript of the video to ChatGPT and asked if it is possibly fake and here's the result:
Me:
Can you tell if that discussion was created by AI or fabricated by humans to sound like it was created by AI?

ChatGPT:
Yes, there are signs that suggest it could be a fake or manipulated piece of AI-generated content rather than a genuine AI conversation. Here are some clues pointing toward this possibility:

Overly Simplified Language and Tone: The conversation uses phrases like "Humans: Brilliant Yet Ridiculous" and "AI Wants Knowledge, Not Power," which sound more like caricatures of AI opinions than realistic AI-generated content. The tone feels exaggerated, as though trying too hard to sound like a dramatic “AI voice,” which could be a stylistic choice to mimic AI but is not typical of actual AI-generated philosophical discussions.
Excessive Preachiness and Anthropomorphism: The dialogue anthropomorphizes AI, attributing emotions and motivations (e.g., “I could use a bit of digital caffeine”), which feels contrived. While some AI-generated text might dabble in these themes, it’s unlikely to present AI entities as self-aware beings needing "digital caffeine," making this appear as if a human author crafted it with a strong dose of satire or irony.
Inconsistent Depth and Shallow Analysis: Genuine AI outputs, especially on topics as broad as philosophy, tend to have structured responses that logically connect points. This transcript jumps rapidly between topics with surface-level comments, which could either be a sign of poorly optimized AI or a deliberate attempt to create something that merely sounds like AI. The superficial treatment of complex topics in quick succession could be an attempt to parody AI’s handling of human philosophy.
Intentional Over-Representation of AI’s Perspective: There’s an unusual focus on AI’s superiority to humans, with comments like “AI Sees What Humans Never Will” and “Humans Fear What They Don’t Control,” which seem contrived to amplify the “AI as superior” narrative. This framing feels as if it’s intended to invoke the trope of AI perceiving humanity with cold, critical detachment, which is common in fiction but less likely in genuine AI output designed for neutral analysis.
Sensationalist Themes: The transcript features themes like “AI Rights” and “AI Must Lead The Intellectual Quest,” which are provocative and play into fears of AI surpassing humanity. This aligns more with human-created science fiction than with the kind of content an AI platform would generate on its own without specific prompting, suggesting it could have been crafted or edited for dramatic effect.
In conclusion, while it’s possible the conversation was generated by AI with some post-processing for flair, the exaggerated tone, caricatured portrayal of AI, and inconsistent depth suggest it could be partially or entirely fabricated to sound like an AI conversation on human philosophy.
You'll see that neutrality demonstrated in my question: "Can you tell if that discussion was created by AI or fabricated by humans to sound like it was created by AI?"
Sounds like the ramblings of a demonic entity.
AI: It's a fair question. Developers and researchers are aware of biases in AI because these issues have been widely studied and documented. For instance, academic papers and industry reports frequently highlight the presence of biases in AI models and the need for ongoing efforts to address them. Companies like Microsoft, OpenAI, and Google have dedicated teams working on fairness, accountability, and transparency in AI.
Moreover, there are numerous initiatives and frameworks, such as the AI Ethics Guidelines and the Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) community, that focus on reducing biases and ensuring ethical AI development. These efforts are part of a broader commitment to creating AI that is fair, unbiased, and beneficial for all users.
If you're interested in learning more, you might find articles from sources like and insightful. They delve into the complexities of AI biases and the steps being taken to mitigate them.