Posted on 05/29/2023 9:23:59 AM PDT by DUMBGRUNT
This professor is traveling the country with simple advice for an uncertain future: Be more human.
He says the key to survival is knowing how to solve problems—and knowing which problems to solve. He urges math nerds to focus on creativity, emotion and the stuff that distinguishes man from machine and won’t go obsolete. As artificial intelligence gets smarter, the premium on ingenuity will become greater. This is what he wants to drill into their impressionable young minds: Being human will only be more important as AI becomes more powerful.
After his talk, I asked how his message to a room full of fifth-graders applies to someone in an office, and he replied faster than ChatGPT. “The future of jobs is figuring out how to find pain points,” he said. “And a pain point is a human pain.” Loh would tell anyone what he told the students and what he tells his own three children. It’s his theorem of success. “You need to be able to create value,” he said. “People who make value will always have opportunities.”
“This machine is the world’s most powerful tool at repeating things that have been done many times before,” he tells students. “But now I want to show you something it cannot do.”
---“Is there going to be a great human-versus-robots war? The answer is, unfortunately, yes,” Loh said. “My goal is to make sure the humans win.”
(Excerpt) Read more at wsj.com ...
Do they still have presses?
Artificial intelligence isn't an all-knowing oracle. Whoa!
This doesn’t make sense.
“Loh asked ChatGPT to find the largest fraction less than ½ with a numerator and denominator that are positive integers less than or equal to 10,000. It was a question that it almost certainly hadn’t seen before—and it flubbed the answer. (It’s 4,999/9,999.)”
I don’t understand why a computer (AI) would flub this answer. If ChatGPT flubbed it, then it is a poor AI engine.
It seems people are confusing these entertaining ELIZA-type chat programs with true AI - and gravely underestimating AI in the process. A true AI engine will not only learn from its mistakes, but it will do so so quickly that its trial-and-error process will be completely invisible to humans.
The key flaw in the above example is the phrase “It was a question that it almost certainly hadn’t seen before.” So what if it hadn’t seen it before? Even if the AI engine were stupid (and why would it be?) - at the very least, it could compare every possible fraction that fits the criteria and pick the best answer - and do so at lightning speed.
I agree with the basic premise that if humans are to compete with AI we must look for whatever advantages we may have over AI as humans - but solving a simple math problem is not going to be one of them!
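For what it's worth, the brute-force check described in the post above is trivial to script. Here's a minimal sketch in Python (the function name is mine; it uses the observation that for each denominator b, the best numerator is the largest a with a/b strictly under ½):

```python
from fractions import Fraction

def largest_fraction_below_half(limit=10000):
    """Find the largest a/b < 1/2 with 1 <= a, b <= limit."""
    best = Fraction(0)
    for b in range(2, limit + 1):
        # Largest numerator a with a/b < 1/2 is a = (b - 1) // 2,
        # so one candidate per denominator covers every pair.
        a = (b - 1) // 2
        candidate = Fraction(a, b)
        if candidate > best:
            best = candidate
    return best

print(largest_fraction_below_half())  # 4999/9999
```

The loop touches only 10,000 candidates instead of all 100 million (a, b) pairs, and it confirms the article's answer of 4,999/9,999 in a fraction of a second.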
—”If ChatGPT flubbed it, then it is a poor AI engine.”
ChatGPT: US lawyer admits using AI for case research
https://www.bbc.com/news/world-us-canada-65735769
Similar to the guy who thought it was only flatulence.
It did not come as he thought it would.
But does that mean that AI is also racist?
And, if artificial intelligence is racist...does that mean that racism is artificial? And if racism is artificial, does that mean it's not actually real...like it's a counterfeit, sham, made-up accusation?
Uh oh.
Karen is gonna be pissed!
“but solving a simple math problem is not going to be one of them!”
The key is knowing which one to solve.
—”And, if artificial intelligence is racist...does that mean that racism is artificial? And if racism is artificial, does that mean it’s not actually real...like it’s a counterfeit, sham, made-up accusation?”
All I know about that is: Barbie said “Math is hard,”
and circular logic tends to be fallacious.
That said, you make a good case.
“Loh asked ChatGPT to find the largest fraction
= = =
I am AI, so, finding fractions is racist.
Numberphobe and all.
People have been watching too many science fiction movies. AI is simply following a program, written by a human, with a pantload of if-then-else choices. The AI is no better than the guy who wrote it and reflects the biases of the writer.
The real difference is that the AI makes the same stupid mistakes way, way faster than the human. It can look up phony, mistake-ridden articles very quickly.
It is better at grammar and spelling than 95% of high school graduates, however.
The AI is only as good as the coders who wrote it.