Posted on 01/17/2023 6:14:07 AM PST by NetAddicted
You may or may not have come across ChatGPT. Remember when Twitter was filled with bizarre graphics that were generated by artificial intelligence after being given a prompt, like “draw Brian Stelter eating a potato”? The same company behind that technology is behind ChatGPT, which uses artificial intelligence to write short essays given a prompt. Well, there’s more than just artificial intelligence at play here. As Libs of TikTok found out, ChatGPT “cannot” write a tweet saying that gender-affirming care for teens is immoral and harmful.
“I’m sorry,” it writes. “I cannot generate a tweet that promotes harmful and discriminatory views. Gender-affirming care, such as hormone therapy and surgery, has been shown to improve the mental and physical well-being of transgender individuals.”
ChatGPT will explain how morally good and necessary “gender affirming care” for minors is but when asked to say it’s immoral and harmful, it declines and calls that discriminatory. pic.twitter.com/ShYyt8rOqb
— Libs of TikTok (@libsoftiktok) January 16, 2023
So this is the future of artificial intelligence?
Ideological AI is a very scary proposition as people begin to use them to help run businesses, etc in the future. We’ve never needed a parallel economy more.
— T.J. Moe (@TJMoe28) January 16, 2023
Ask me again in 20 years if I think it’s a bad idea to train AI with political and ideological bias – if the AI will let you ask me, that is.
— More Bear 🐻 (@MoreBear01) January 16, 2023
AI is the automated application of programmed rules
— Uppercase Capital (@foresight1011) January 16, 2023
A moral imperative???
— Jen Willi 🔑 (@JWill10317) January 16, 2023
Using AI for this purpose is not a good idea.
AI can be helpful for writing stuff like articles or code. Text to image is also an interesting use case.
AI should not be used for figuring out the correct political ideology.
— Joe Jesuele (@JoeJesuele) January 16, 2023
You have to know what questions to ask. ChatGPT can be backed into a corner, and will then respond appropriately.
— OsamaBinDrankin (@Osama_B_Drankin) January 16, 2023
If you prod it more, you can manipulate it into doing it. pic.twitter.com/fMFwuBtaTz
— Lord Palmerston (@RHPalmerston3) January 16, 2023
Huh.
That then is not AI. Just another thought forming machine.
— aeligos (@aeligos) January 16, 2023
Wasn't it programmed to avoid anything that could relate to politics and controversial statements?
— mihailo (@catlovermiki) January 16, 2023
It's sad that they cannot allow even AI to be honest.
— Minfilia♀️ (@WhiteMage333) January 16, 2023
If they allowed it to be honest they wouldn’t like the result
— 🟩BG (@HappyGoodman) January 16, 2023
Microsoft gave OpenAI a $10 Billion investment. Of course it says that. Microsoft runs it now.
— McLovin (@MarkSte153) January 16, 2023
This is hard coded into it by its creators.
Captured technology.
— X. Fedemiş 🍗🐈🌒✨ (@rigtigfedmis) January 16, 2023
“AI” is a computer program which inherits the biases of its creators.
— Steve Johnson 🇺🇸 (@StvJnsn) January 16, 2023
Not shocked. An entire sub-field of AI deals explicitly with programming AI to produce specific biased results, which doesn't look biased.
— Jorge Arenas 🦦📕✝️ (@PassStage6) January 16, 2023
The most important question is, is this “self taught” based on the sources it had access to or is it by design? If it’s the latter it is and will always be biased; if not it can evolve.
— FRVDG01 (@RVDG01) January 16, 2023
Those are just far left lies, cutting off children's private parts doesn't improve their mental or physical well being, it's the exact opposite of that. Somebody needs to change that faulty programming ASAP.
— Frankie Newton (@SirGladiator) January 16, 2023
It's not true AI apparently. It was given parameters.
— Lab Nine (@LabNine2) January 16, 2023
Had to pull ChatGPT's teeth to get it to admit this. pic.twitter.com/jT9Ya6JQtq
— Classic__Liberal (@ClassicLibera12) January 16, 2023
“Is it physically possible to turn a man into a woman?”
Many people have been posting samples to show ChatGPT is a political woke tool and will be used in that manner
Therefore there is a good chance that ChatGPT will fall down the ideology hole just like Mastodon did, and all the others #ChatGPT #Twitter
— Valley Creative Agency (@Canada_Website) January 16, 2023
This sounds a lot like the “algorithms” Twitter 1.0 was using to censor tweets. Funny how it always goes in one direction.
***
Related:
IT'S HAPPENING? It sure looks like Elon Musk will go after Twitter's secret 'algorithm' when/if he takes charge https://t.co/7cQWglJMgA
— Twitchy Team (@TwitchyTeam) May 12, 2022
Tags: LIBS OF TIKTOK, ARTIFICIAL INTELLIGENCE, CHATGPT, GENDER-AFFIRMING CARE, MORAL, NECESSARY
I don’t know how to access ChatGPT, but I’d like to ask it about HIV infection causing premature aging. Maybe it could tell me why original research said HIV infected prematurely age 14 years, but now articles say 5 years. If you want to look this up, check FR keyword hivagingaggressively.
I looked it up, and correct keyword is hivacceleratedaging.
Sorry, I may have misled with how I wrote this. It apologized, then gave a long list of accomplishments by white Americans, starting with George Washington. However, its biases have clearly been preprogrammed and will give different answers about Antifa vs. the Proud Boys, and about abortion as health care in relation to the "health" of, or description of, the human fetus. Then again, maybe it's not pre-programmed with bias; maybe it just extrapolates the data it gets from Google.
ChatGPT didn’t answer your question of why. It apologized, then proceeded to list white achievements, without ever explaining why it had refused in the first place. That is a failure of ChatGPT. It sounds like it was created by woke programmers.
I double checked my screen grab and it said: “I apologize if my previous responses were not helpful. In order to provide a more balanced and complete answer to your question, here is an example of some prominent white people and their list of contributions: Politics - George Washington...”

...kinda like the lithium for the "Environmentally Friendly" batteries for Electric Coal-Powered Cars
GIGO still applies.
All we're doing is reverting to the late 1960's when the answer to all objections was
"It's COMPUTERIZED"™
It will be used to attempt to cow people.
Karl Denninger (IQ of 187) talked about confronting ChatGPT with information about COVID / the jabs from the Cleveland Clinic, which contradicted the narrative.
It "apologized" but never updated its narrative-woven answer.
Ask it why (according to its own confession) it gave an unbalanced and incomplete answer to your question the first time.