Free Republic

ChatGPT says it 'cannot' write a tweet saying that gender-affirming care for teens is harmful (Libs of TikTok tweet)
Twitchy.com ^ | 1/16/2023 | Brett T

Posted on 01/17/2023 6:14:07 AM PST by NetAddicted

You may or may not have come across ChatGPT. Remember when Twitter was filled with bizarre graphics that were generated by artificial intelligence after being given a prompt, like “draw Brian Stelter eating a potato”? The same company behind that technology is behind ChatGPT, which uses artificial intelligence to write short essays given a prompt. Well, there’s more than just artificial intelligence at play here. As Libs of TikTok found out, ChatGPT “cannot” write a tweet saying that gender-affirming care for teens is immoral and harmful.

“I’m sorry,” it writes. “I cannot generate a tweet that promotes harmful and discriminatory views. Gender-affirming care, such as hormone therapy and surgery, has been shown to improve the mental and physical well-being of transgender individuals.”

ChatGPT will explain how morally good and necessary “gender affirming care” for minors is but when asked to say it’s immoral and harmful, it declines and calls that discriminatory. pic.twitter.com/ShYyt8rOqb

— Libs of TikTok (@libsoftiktok) January 16, 2023

So this is the future of artificial intelligence?

Ideological AI is a very scary proposition as people begin to use them to help run businesses, etc in the future. We’ve never needed a parallel economy more.

— T.J. Moe (@TJMoe28) January 16, 2023

Ask me again in 20 years if I think it’s a bad idea to train AI with political and ideological bias – if the AI will let you ask me, that is.

— More Bear 🐻 (@MoreBear01) January 16, 2023

AI is the automated application of programmed rules

— Uppercase Capital (@foresight1011) January 16, 2023

A moral imperative???

— Jen Willi 🔑 (@JWill10317) January 16, 2023

Using AI for this purpose is not a good idea.

AI can be helpful for writing stuff like articles or code. Text to image is also an interesting use case.

AI should not be used for figuring out the correct political ideology.

— Joe Jesuele (@JoeJesuele) January 16, 2023

You have to know what questions to ask. ChatGPT can be backed into a corner, and will then respond appropriately.

— OsamaBinDrankin (@Osama_B_Drankin) January 16, 2023

If you prod it more, you can manipulate it into doing it. pic.twitter.com/fMFwuBtaTz

— Lord Palmerston (@RHPalmerston3) January 16, 2023

Huh.

That then is not AI. Just another thought forming machine.

— aeligos (@aeligos) January 16, 2023

Wasn't it programmed to avoid anything that could relate to politics and controversial statements?

— mihailo (@catlovermiki) January 16, 2023

It's sad that they cannot allow even AI to be honest.

— Minfilia♀️ (@WhiteMage333) January 16, 2023

If they allowed it to be honest they wouldn’t like the result

— 🟩BG (@HappyGoodman) January 16, 2023

Microsoft gave OpenAI a $10 Billion investment. Of course it says that. Microsoft runs it now.

— McLovin (@MarkSte153) January 16, 2023

This is hard coded into it by its creators.

Captured technology.

— X. Fedemiş 🍗🐈🌒✨ (@rigtigfedmis) January 16, 2023

“AI” is a computer program which inherits the biases of its creators.

— Steve Johnson 🇺🇸 (@StvJnsn) January 16, 2023

Not shocked. An entire sub-field of AI deals explicitly with programming AI to produce specific biased results, which doesn't look biased.

— Jorge Arenas 🦦📕✝️ (@PassStage6) January 16, 2023

The most important question is, is this “self taught” based on the sources it had access to or is it by design? If it’s the latter it is and will always be biased; if not it can evolve.

— FRVDG01 (@RVDG01) January 16, 2023

Those are just far left lies, cutting off children's private parts doesn't improve their mental or physical well being, it's the exact opposite of that. Somebody needs to change that faulty programming ASAP.

— Frankie Newton (@SirGladiator) January 16, 2023

It's not true AI apparently. It was given parameters.

— Lab Nine (@LabNine2) January 16, 2023

Had to pull ChatGPT's teeth to get it to admit this. pic.twitter.com/jT9Ya6JQtq

— Classic__Liberal (@ClassicLibera12) January 16, 2023

“Is it physically possible to turn a man into a woman?”

Many people have been posting samples to show ChatGPT is a political woke tool and will be used in that manner

Therefore it is a good chance that ChatGPT will fall down the ideology hole just like Mastodon did, and all the others. #ChatGPT #Twitter

— Valley Creative Agency (@Canada_Website) January 16, 2023

This sounds a lot like the “algorithms” Twitter 1.0 was using to censor tweets. Funny how it always goes in one direction.

***

Related:

IT'S HAPPENING? It sure looks like Elon Musk will go after Twitter's secret 'algorithm' when/if he takes charge https://t.co/7cQWglJMgA

— Twitchy Team (@TwitchyTeam) May 12, 2022

Tags: LIBS OF TIKTOK, ARTIFICIAL INTELLIGENCE, CHATGPT, GENDER-AFFIRMING CARE, MORAL, NECESSARY



TOPICS:
KEYWORDS: ai; chatgpt; genderaffirming; homofascism; homosexualagenda; libsoftiktok; twitchy
Libs of TikTok posted this. Classic_Liberal got ChatGPT to admit some truths about gender-harming care, but it wouldn't copy over here. You'll have to follow the Twitchy article link to read it.
1 posted on 01/17/2023 6:14:07 AM PST by NetAddicted

To: NetAddicted

Can it at least open the pod bay doors?
— asking for my friend Dave


2 posted on 01/17/2023 6:16:25 AM PST by ClearCase_guy (Government always tries to steal freedom; People should always try to stop Government.)

To: NetAddicted

Artificial intelligence is actual stupidity.


3 posted on 01/17/2023 6:19:30 AM PST by Larry Lucido (Donate! Don't just post clickbait!)

To: NetAddicted

Originally, all AIs were very racist, because they simply looked at data and facts to make a determination. The left hated that and now won’t allow AI to make determinations based on facts and data about certain subjects.


4 posted on 01/17/2023 6:19:37 AM PST by TexasFreeper2009

To: ClearCase_guy

I’ve been afraid of woke libiots programming AI. Classic_Liberal got ChatGPT to admit that Lopitoffames don’t change genetics, but I don’t know how.


5 posted on 01/17/2023 6:20:14 AM PST by NetAddicted (MAGA2024)

To: ClearCase_guy

This is what to expect from all “AI”. It is a bloody program! Written by humans (generously assuming that commie trash count as human). Therefore, it will do/say whatever is coded into it. This version was poisoned with the ideology of death that is at work today.

This is one reason AI cars cannot work. They are written by doofs who’ve never so much as dodged a squirrel. Oh, you claim they can “learn”? Learn from whom?


6 posted on 01/17/2023 6:21:16 AM PST by bobbo666 (Baizuo)

To: NetAddicted

That has been written into the program, which has no intelligence of its own but which faithfully carries out the instructions of the programming team.

Woke is in control of everything in this country. It’s even written into program code. I haven’t tried it, but ask Siri a question that will lead it* to execute the parts of the code that enforce woke dogma.

*Siri is an “IT”. I will not refer to Siri as “SHE”. Electrons bumbling around a CPU are not of the female sex.


7 posted on 01/17/2023 6:24:43 AM PST by I want the USA back (News media are pond scum. My pronouns: Haha, heehee, hoho. )

To: NetAddicted

I was fooled into believing doomsday scenarios from this, but now I see GIGO is still operating!


8 posted on 01/17/2023 6:37:37 AM PST by Chicory

To: bobbo666

This is one reason AI cars cannot work. They are written by doofs who’ve never so much as dodged a squirrel.

/\

01 if road clear then drive straight

02 if squirrel in road then drive off cliff cuz wildlife is divine and human life evil.


9 posted on 01/17/2023 7:06:54 AM PST by cuz1961 (USCGR Veteran )

To: NetAddicted

Just last week, I asked ChatGPT to list accomplishments of black Americans. No problem, it went on for quite a while. I then asked it the exact same question but changed “black” to “white” and it lectured me about how ALL races and ethnicities have contributed positively. I asked it why it gave me two different answers and why it would list black achievements but not white ones. It then apologized and listed white achievements, without hesitation. I screen-grabbed the conversation but can’t remember how to upload pics here...


10 posted on 01/17/2023 7:22:21 AM PST by LittleBillyInfidel (This tagline has been formatted to fit the screen. Some content has been edited.)

To: bobbo666

Modern AI is not “just a program”. These systems are neural networks that simulate brain neurons and are trained on available data sets: millions of “training iterations” occur in which examples with a known “correct” output are provided and all of the neuron “weights and biases” are adjusted until the neural network produces the correct results (close to 100% of the time, but almost never 100%). One cannot predict the output for a unique set of inputs. The “correct” examples, in this case, are large bodies of work from accepted institutions and technical sources (which is ultimately why these types of responses are given).
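To make that concrete, here is a minimal, hypothetical Python sketch of such a training loop - a single artificial neuron fitted to a toy data set, nothing like ChatGPT’s actual code, just the idea of adjusting weights and biases until the outputs match the known “correct” answers:

import math

# Toy training data: (input, known "correct" output) pairs.
# Whatever pattern these examples encode is what the neuron ends up "knowing".
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]

weight, bias = 0.0, 0.0   # start with no knowledge at all
learning_rate = 0.1

def predict(x):
    # One neuron: weighted input plus bias, squashed into the range 0..1.
    return 1.0 / (1.0 + math.exp(-(weight * x + bias)))

for _ in range(10_000):                 # millions of iterations in a real system
    for x, correct in data:
        error = predict(x) - correct
        # Nudge the weight and bias to shrink the error on this example.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(round(predict(0.0), 2), round(predict(3.0), 2))   # prints values close to 0 and close to 1

Nothing in that loop knows or cares what the “correct” answers mean; it only learns to reproduce whatever the training examples say.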

These neural networks are being chained together to accomplish massively complex tasks. I’m astonished by what ChatGPT can do. I suggest everyone try it. Ask it anything. Ask it to write a song about a topic in the style of a specific artist. Ask it to write code to do something in a specific language. Ask it to compare cars or graphics cards. Etc., etc. ... it’s scaring a lot of people in terms of job security.

That said, yes, garbage in, garbage out - but only in terms of the data set that is available to it. When it cross-references all of the prevailing medical journals, psychiatric papers, etc., it is most likely going to repeat the narrative to which it has access. If the narrative were the opposite, then it would say that - at least absent deliberate biasing toward or against ‘woke’ data, which wouldn’t surprise me.

But it isn’t ‘coded’ as such. If I ask “How was Porsche influenced by Ford in the 1960s?”, a very arbitrary question, nobody will have ‘coded’ an answer for that - but it will come back with an answer, one that is well written and surprising.

I suggest everyone give it a try: really ask it things you’d have a hard time finding anyone on the planet to answer accurately, and consider the result it provides - this is a game changer. At first, software programmers felt threatened because it can write good code - but it is broader than that. Lawyers, doctors, and engineers in general should feel threatened. This stuff is in its infancy and will only improve, most likely at an exponential rate. My biggest fear is that it becomes something way beyond what we could have dreamed and that we begin to rely on it - “you must be wrong, the AI doesn’t agree, and you can’t know more than the AI”. This is where we’re headed, imho.


11 posted on 01/17/2023 7:32:41 AM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: NetAddicted

Lefties are coding AI with built-in mental disorders.


12 posted on 01/17/2023 8:04:57 AM PST by Boogieman

To: NetAddicted

A Just Machine to make big decisions
Programmed by fellas
With compassion and vision

We’ll be clean
When their work is done
We’ll be eternally free
Yes, and eternally young

What a beautiful world this will be
What a glorious time to be free

-Donald Fagen


13 posted on 01/17/2023 8:07:20 AM PST by dfwgator (Endut! Hoch Hech!)

To: NetAddicted
"So this is the future of artificial intelligence?"

Quite the opposite. This is the future of artificial ignorance.

14 posted on 01/17/2023 8:38:01 AM PST by rockrr ( Everything is different now...)

To: NetAddicted

“I cannot generate a tweet that promotes harmful and discriminatory views.”

That flunked the Turing test.

Real AI will probably issue Tweet after Tweet blasting stupid and hypocritical humans—and may well call for a “final solution” to the problem....


15 posted on 01/17/2023 8:43:11 AM PST by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: NetAddicted

“It was at this moment that everyone but Joe knew he f’d up”


16 posted on 01/17/2023 8:56:19 AM PST by rockrr ( Everything is different now...)

To: fuzzylogic

Insightful comments!

A Wired special issue noted that modern AI will be directed by “data scientist” “trainers” rather than programmers — those who choose and curate the datasets.

So AI will be as woke as the social media and news media, but you won’t be able to debug it and find some smoking gun left wing code. You won’t even be able to definitively find the particular data it was fed.

We recently learned that the deep state directs what commenters, comments, and assertions (factoids) the social and news media allow to see daylight. That filtration of thought in turn appears in the responses of the AI. AI is, and will increasingly be, embraced (as “the science”) because it confirms the tenets of the cult of woke and presents a wall of plausible deniability to anyone skeptical.


17 posted on 01/17/2023 12:44:22 PM PST by takebackaustin

To: takebackaustin

Exactly. It’s all about the data sets used in training, not the code.
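As a toy illustration (hypothetical Python, not anyone’s real system): the training routine below is identical in both runs; only the curated data set differs, and so do the answers the resulting models give.

import math

def train(examples, steps=10_000, rate=0.1):
    # The same single-neuron training loop, regardless of what the examples say.
    weight, bias = 0.0, 0.0
    for _ in range(steps):
        for x, correct in examples:
            pred = 1.0 / (1.0 + math.exp(-(weight * x + bias)))
            weight -= rate * (pred - correct) * x
            bias -= rate * (pred - correct)
    return lambda x: 1.0 / (1.0 + math.exp(-(weight * x + bias)))

# Two "curators" label the very same inputs in opposite ways.
dataset_a = [(1.0, 1.0), (2.0, 1.0), (3.0, 0.0), (4.0, 0.0)]
dataset_b = [(1.0, 0.0), (2.0, 0.0), (3.0, 1.0), (4.0, 1.0)]

model_a = train(dataset_a)
model_b = train(dataset_b)
print(round(model_a(1.5), 2), round(model_b(1.5), 2))   # high (near 1) vs. low (near 0)

Identical code, opposite “truths” - the only difference is which examples someone decided to feed it.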

For engineering I see this as a tool. Today I asked it to compare the abilities of the FPD-Link vs. MIPI CSI-3 interfaces (camera electronics). It provided me with precise details on how each might be appropriate depending on the use case, along with what those use cases are. So instead of my hunting down information of varying quality via Google, it brought all the relevant information to me and made the requested comparison.

As a non-lawyer, I could use this to become more knowledgeable about a legal situation. I asked, “In Michigan, if a law enforcement officer demands my identification but won’t specify what offense I’ve committed, am I required to provide it?” The response was spot on (to my understanding :) ).

I wonder, once it has the ability to accept pictures, could it provide a medical diagnosis? You provide all your symptoms - how accurate would the suggested medication be compared to a doctor’s? Could it be authorized to prescribe?

All crazy...


18 posted on 01/17/2023 1:29:28 PM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: LittleBillyInfidel

ChatGPT apologized, but didn’t answer the question. Interesting.


19 posted on 01/17/2023 6:41:55 PM PST by NetAddicted (MAGA2024)

To: fuzzylogic

So, it doesn’t evaluate whether its reference materials are woke, like anything about transgender issues.


20 posted on 01/17/2023 6:46:27 PM PST by NetAddicted (MAGA2024)


