Posted on 01/17/2023 6:14:07 AM PST by NetAddicted
Can it at least open the pod bay doors?
— asking for my friend Dave
Artificial intelligence is actual stupidity.
Originally, all AIs were very racist, because they simply looked at data and facts to make a determination. The left hated that, and now won’t allow AI to make determinations based on facts and data about certain subjects.
I’ve been afraid of woke libiots programming AI. Classic_Liberal got ChatGPT to admit that Lopitoffames don’t change genetics, but I don’t know how.
This is what to expect from all “AI”. It is a bloody program! Written by humans (generously assuming on your part that commie trash are human). Therefore, it will do/say whatever is coded into it. This version was poisoned by the ideology of death that is at work today.
This is one reason AI cars cannot work. They are written by doofs who’ve never so much as dodged a squirrel. Oh, you claim they can “learn”? Learn from whom?
That has been written into the program, which has no intelligence of its own but which faithfully carries out the instructions of the programming team.
Woke is in control of everything in this country. It’s even written into program code. I haven’t tried it, but ask Siri a question that will lead it* to execute the parts of the code that enforce woke dogma.
*Siri is an “IT”. I will not refer to Siri as “SHE”. Electrons bumbling around a CPU are not of the female sex.
I was fooled into believing doomsday scenarios from this, but now I see GIGO (garbage in, garbage out) is still operating!
This is one reason AI cars cannot work. They are written by doofs who’ve never so much as dodged a squirrel.
/\
01 if road clear then drive straight
02 if squirrel in road then drive off cliff cuz wildlife is divine and human life evil.
Just last week, I asked ChatGPT to list accomplishments of black Americans. No problem, it went on for quite a while. I then asked it the exact same question but changed “black” to “white”, and it lectured me about how ALL races and ethnicities have contributed positively. I asked it why it gave me two different answers and why it would list black achievements but not white ones. It then apologized and listed white achievements, without hesitation. I screen-grabbed the conversation but can’t remember how to upload pics here...
Modern AI is not “just a program”. These systems are neural networks that simulate brain neurons. They are trained on available data sets through millions of “training iterations”: the network is shown examples with a known “correct” output, and all of the neuron “weights and biases” are adjusted until it produces the correct results (close to 100%, but almost never 100%). One cannot predict the output from a unique set of inputs. The “correct” examples, in this case, are large bodies of work from accepted institutions and technical sources (ultimately why these types of responses are given).
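A minimal sketch of the training loop just described: a tiny neural network whose “weights and biases” are nudged over many iterations until it reproduces known “correct” outputs. This is an illustration only, not how ChatGPT is actually trained; the XOR dataset, the 2-4-1 layer sizes, and the plain gradient-descent update are all assumptions chosen to keep the example small.

```python
# Sketch: adjust weights and biases over many "training iterations"
# until the network matches known correct outputs (here: XOR).
import numpy as np

rng = np.random.default_rng(0)

# Training examples with a known "correct" output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden "neurons"
    return h, sigmoid(h @ W2 + b2)  # network's current answer

# Randomly initialized weights and biases for a 2-4-1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

_, out = forward(X)
loss0 = float(((out - y) ** 2).mean())  # error before training

lr = 0.5
for step in range(20000):               # many "training iterations"
    h, out = forward(X)
    # Nudge every weight and bias in the direction that shrinks the
    # error on the known outputs (plain gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

_, out = forward(X)
loss1 = float(((out - y) ** 2).mean())  # error after training
print(f"loss before: {loss0:.3f}  after: {loss1:.3f}")
```

After training, the outputs land close to the targets but rarely hit them exactly, which matches the “close to 100% but almost never 100%” point above; and nothing in the code spells out an answer for any particular input.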
These neural networks are being chained together to accomplish massively complex tasks. I’m astonished by what ChatGPT can do. I suggest everyone try it. Ask it anything. Ask it to write a song about a topic in the style of a specific artist. Ask it to write code to do something in a specific language. Ask it to compare cars or graphics cards. Etc., etc. It’s scaring a lot of people in terms of job security.
That said, yes, garbage in, garbage out, but only in terms of the data set that is available to it. When it cross-references all of the prevailing medical journals, psychiatric papers, etc., it is most likely going to repeat the narrative it has available. If the narrative were the opposite, then it would say that (at least absent deliberate biasing toward or against ‘woke’ data, which wouldn’t surprise me).
But it isn’t ‘coded’ as such. If I ask “How was Porsche influenced by Ford in the 1960s?”, a very arbitrary question, nobody will have ‘coded’ an answer for that, yet it will come back with a surprising, well-written answer.
I suggest everyone give it a try: really ask it things you’d have a hard time finding anyone on the planet to accurately answer, and consider the result it provides. This is a game changer. At first, software programmers felt threatened, as it can write good code, but it is broader than that. Lawyers, doctors, and engineers in general should feel threatened. This stuff is in its infancy and will only improve, most likely at an exponential rate. My biggest fear is something way beyond what we could have dreamed: that we begin to rely on it. “You must be wrong; the AI doesn’t agree, and you can’t know more than the AI.” This is where we’re headed, IMHO.
Lefties are coding AI with built-in mental disorders.
A Just Machine to make big decisions
Programmed by fellows
With compassion and vision
We’ll be clean
When their work is done
We’ll be eternally free
Yes, and eternally young
What a beautiful world this will be
What a glorious time to be free
- Donald Fagen, “I.G.Y.”
Quite the opposite. This is the future of artificial ignorance.
“I cannot generate a tweet that promotes harmful and discriminatory views.”
That flunked the Turing test.
Real AI will probably issue Tweet after Tweet blasting stupid and hypocritical humans—and may well call for a “final solution” to the problem....
“It was at this moment that everyone but Joe knew he f’d up”
Insightful comments!
A Wired special issue noted that modern AI will be directed by “data scientist” “trainers” rather than programmers — those who choose and curate the datasets.
So AI will be as woke as the social media and news media, but you won’t be able to debug it and find some smoking gun left wing code. You won’t even be able to definitively find the particular data it was fed.
We recently learned that the deep state directs what commenters, comments, and assertions (factoids) the social and news media allow to see daylight. That filtration of thought in turn appears in the responses of the AI. AI is, and will increasingly be, embraced (as “the science”) because it confirms the tenets of the cult of woke and presents a wall of plausible deniability to anyone skeptical.
Exactly. It’s all about the data sets used in training, not the code.
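That point can be shown with a toy sketch: identical training code, two differently curated datasets, two different answers. Everything here is hypothetical, invented word lists and a crude word-count “model,” nothing resembling a real system, but it illustrates how curation alone changes behavior with no “smoking gun” in the code.

```python
# Toy illustration: the same code, trained on two differently curated
# datasets, classifies the same word differently. The word lists are
# made up purely for demonstration.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, word):
    """Label a word by whichever training pile mentioned it more."""
    return "pos" if counts["pos"][word] >= counts["neg"][word] else "neg"

# Two curators, same code, different chosen data.
curated_a = [("apples are wholesome", "pos"), ("rain is dreary", "neg")]
curated_b = [("rain is refreshing", "pos"), ("apples are dull", "neg")]

model_a = train(curated_a)
model_b = train(curated_b)

print(classify(model_a, "apples"))  # -> pos
print(classify(model_b, "apples"))  # -> neg
```

`train` and `classify` never change between the two runs; only the curated examples do, which is why auditing the code would turn up nothing.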
For engineering I see this as a tool. Today I asked it to compare the abilities of the FPD-Link vs. MIPI CSI-3 interfaces (camera electronics). It provided me with precise details on how each might be appropriate depending on the use case, along with what those use cases are. So instead of me hunting down information of varying quality via Google, it brought all the relevant information to me and made the requested comparison.
As a non-lawyer I could use this to get more knowledgeable about a legal situation. I asked, “In Michigan, if a law enforcement officer demands my identification but won’t specify what offense I’ve committed am I required to provide it?”. The response was spot on (to my understanding :) ).
I wonder, once it has the ability to accept pictures, could it provide a medical diagnosis? You provide all your symptoms; how accurate would the suggested medication be compared to a doctor’s? Could it be authorized to prescribe?
All crazy...
ChatGPT apologized, but didn’t answer the question. Interesting.
So it doesn’t evaluate whether its reference materials are woke, such as anything about transgender issues.