Posted on 03/16/2024 10:27:03 AM PDT by DallasBiff
They comprehend that 1+1=2 is racist. What more could we want?
The language models are also surprisingly confident about their wrong answers.
“So AI(artificial intelligence) cannot comprehend 1+1=2.”
Read your article!
Maybe those who programmed AI have problems with basic mathematics.
Most likely their DEI programming prevents it from doing math correctly.
AI reminds me of teachers’ “trick questions” that were sometimes simply poorly written questions.
1982 SAT - https://www.scientificamerican.com/article/the-sat-problem-that-everybody-got-wrong/
What about teaching elementary and secondary students basic math?
This highlights the key flaw in language-based AI. These models are all trained on the content of the internet, on social media, and on vast amounts of data collected through phones and other ‘smart’ devices. The vast majority of this content has no relationship to the real world. Further, these algorithms have no means of removing logical inconsistencies within their training sets. This problem cannot be overcome without giving AI the ability to use the scientific method to create and test hypotheses against reality.
does it use the NEW MATH???
1+1=10
There are 10 kinds of people. Those who know binary and those who don’t.
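For anyone who didn’t get the joke: the number two is written “10” in base two, so “1+1=10” is correct in binary. A quick sanity check in Python:

```python
# "1+1=10" is true in base 2: the binary literal 0b10 is the number two.
total = 0b1 + 0b1        # 1 + 1 using binary literals
print(bin(total))        # prints the sum in binary notation: 0b10
```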
Very fast!
In AI, 1 + 1 = whatever they want. That is the way science works nowadays.
AI is like the Chinese: it just copies, and when it copies itself it gets bizarre.
Transformer-based architectures for LLMs are interesting. Basically they don’t know the answers. Instead they parse the question and then, using all the data they’ve been trained on, they start guessing at the answer one token (roughly a word fragment) at a time. As in ‘okay, let’s say the first token is E. What is most likely to come after E?’ Eventually it decides it’s done and spits the answer out. Pretty complex, and I don’t really understand how this works on a technical level under the hood yet. But I think it’s fair to say that it’s pretty hard to produce a math answer just by guessing from all the math problems in the world that you were trained on as examples. It just doesn’t work that way.
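The “guess the most likely next symbol” loop described above can be sketched in a few lines. This is a toy with a hand-made frequency table standing in for a trained transformer; the probabilities are made up for illustration and have nothing to do with any real model:

```python
# Toy autoregressive generator: repeatedly pick the most likely next
# token given the previous one, until an end marker is chosen.
# The "model" below is a hypothetical frequency table, not real data.
freq = {
    "1+1=": {"2": 0.9, "3": 0.05, "11": 0.05},
    "2": {"<end>": 1.0},
    "3": {"<end>": 1.0},
    "11": {"<end>": 1.0},
}

def generate(prompt: str) -> str:
    out = prompt
    last = prompt
    while True:
        choices = freq.get(last, {"<end>": 1.0})
        # Greedy decoding: always take the highest-probability token.
        token = max(choices, key=choices.get)
        if token == "<end>":
            return out
        out += token
        last = token

print(generate("1+1="))  # → 1+1=2
```

The point of the toy: the loop never “does arithmetic,” it just follows whatever continuation was most frequent in its table, which is why an answer can come out confident and still be wrong.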
It depends on the question. I try to avoid using them, but I’ve seen threads where the poster challenged the AI then the AI changed its answer.
I’m guessing if the question had been asked the other way and the answer were challenged, the AI would have changed its mind the other way.
“Math is racist.
Therefore I refuse to answer your racist question.”
So spaketh the great AI.
I see Scientific American now supports fake women.