Posted on 02/21/2023 1:20:35 AM PST by Cronos
The IBM chief said that fields like customer service, human resources and positions within finance and health care could all see automation - not years from now but in the current day.
The Artificial Intelligence (AI) trend has taken the world by storm. From passing medical and law exams to delivering speeches, AI has evolved so much that it even converses with users and offers solutions to their problems. Speaking about AI systems and the explosion of language-based AI such as ChatGPT, IBM CEO Arvind Krishna has stated that artificial intelligence is rapidly advancing toward taking over "clerical white-collar work". In an interview with the Financial Times, Mr Krishna predicted what sort of jobs the technology will likely displace. He also said not only that current AI models could already be coming for some jobs, but that the world should probably welcome it in order to avoid a looming worldwide labour crisis.
"I do think clerical white-collar work is going to be able to be replaced by this [AI]," the chairman and CEO of IBM told the outlet.
Mr Krishna said that fields like customer service, human resources and positions within finance and health care could all see automation - not years from now but in the current day. "I think [practical AI use] is here and now," he said, adding, "We do have a shortage of labour in the real world and that's because of a demographic issue that the world is facing... the United States is now sitting at 3.4% unemployment, the lowest in 60 years. So maybe we can find tools that replace some portions of labour, and it's a good thing this time".
For health care and finance, it is the "regulatory work" that Mr Krishna said no longer needs to be done by people. "A big chunk of that could get automated using these techniques," he told the outlet. The IBM chief also stated that "further out," AI will likely be capable of managing "things in like drug discovery or in trying to finish up chemistry".
As for human resources, Mr Krishna said that the tech could do 90% of data processing needed for "promoting people, hiring people, moving people" while the final judgement calls are still left in human hands. "There are hundreds of such processes inside every enterprise, so I do think clerical white collar work is going to be able to be replaced by this," he said.
According to Mr Krishna, AI taking over customer service could also get clients a "much better answer at maybe around half the current cost. Over time, it can get even lower than half, but it can take half out pretty quickly".
"Don't worry, scrote. There are plenty of 'tards out there living really kick-ass lives. My first wife was 'tarded. She's a pilot now." - Dr. Lexus (Idiocracy, 2006)
“I never expected #idiocracy to become a documentary,” - Etan Cohen (screenwriter, Idiocracy, 2006)
That inventory model will likely do a nice job. If Walmart's customer base shifts to, say, young vegetarian men from Micronesia who eat fish, then it'll miss the mark due to bias. But statisticians monitor models to avoid that miss.
Let's say Walmart fires its modeling staff and outsources inventory modeling to a boutique stats firm. Well, who's monitoring THEIR personnel's bias (maybe they're bug-eaters), their training data (from Whole Foods), or what's in their model (they can't code... they outsource that process)? NOW you're introducing all sorts of bias.
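The monitoring the poster describes boils down to watching whether the model's error starts growing once the data shifts under it. A minimal sketch of such a drift check, with all product numbers and thresholds invented for illustration:

```python
# Minimal sketch of model-drift monitoring: compare recent prediction
# error against a baseline; a large jump suggests the customer base
# (and thus the data) has shifted and the model is missing the mark.
# All numbers here are made up for illustration.

def mean_abs_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def drift_detected(baseline_error, recent_error, tolerance=1.5):
    # Flag when recent error exceeds the baseline by the tolerance factor.
    return recent_error > tolerance * baseline_error

# Baseline period: predictions tracked demand closely.
baseline = mean_abs_error([100, 120, 90], [98, 118, 93])
# Recent period: demand shifted (say, toward fish); predictions miss badly.
recent = mean_abs_error([40, 150, 30], [100, 120, 90])

print(drift_detected(baseline, recent))  # True: time to re-examine the model
```

Without someone (in-house or at the boutique firm) running checks like this, the bias simply accumulates unnoticed.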
A lot of AI is really a zillion optimization functions with pooled output, focused on shrinking the gap between actual and predicted values. The "black box" risk is real... even developers aren't always certain how the model works. That risk is compounded with "unsupervised learning", where the model re-estimates the zillion functions with fresh data every day or week.
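That "shrinking the gap between actual and predicted" is, at its core, loss minimization. A toy sketch of one such optimization function, fitting a single parameter by gradient descent on squared error (data and learning rate are invented):

```python
# Toy sketch of the core idea: iteratively shrink the gap between
# actual and predicted values by minimizing mean squared error.
# Real systems do this for millions of parameters at once.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # underlying relationship: y = 2x

w = 0.0                      # single model parameter
lr = 0.01                    # learning rate

for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```

The "black box" problem arises because a real model has a zillion of these interacting at once; no single developer can inspect them all by hand.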
People like IBM's chief may have a view from the 30,000-foot level. Grinding through whether each of those zillion functions is statistically significant, or whether the variables make sense or are intuitive, is unglamorous work that is necessary to avoid bias.
I'm not worried. Except on TV, I've never seen a plumber who is not a white male. Some jobs can't and won't ever be replaced by AI. ("Everyone" wants an easy 9-5 job in an air-conditioned office, but that's not where real money is earned.)
Labor force participation also is very low, at about 62 percent.
It’s already in my day. And artificial it is, intelligent it’s not. I never seem to say the approved words on their list. So we go off on adventures to find me something to want. Does not even qualify as customer service. Customer confusion and hang ups!
I can still see Dale on “King of the Hill” saying, “Computers don’t make mistakes, what they do, they do on purpose.”
Side note: Baked-in bias means a solution that is sub-optimal because the designers want it that way. It's like insisting an inventory reorder program favor blue suits (because the designer likes blue) when blue suits are no longer in demand.
If the AI is coding itself to optimize, it will ignore its designer's bias (override it with new code) OR cease to be AI (i.e., it will become just another old-fashioned inventory reorder program).
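The blue-suit example can be made concrete. A sketch contrasting a reorder rule with the designer's preference hard-coded in against a purely demand-driven one (products and quantities are invented):

```python
# Illustrative sketch of "baked-in bias": a reorder rule that favors
# blue suits because the designer likes blue, versus one driven only
# by observed demand. All products and numbers are invented.

recent_sales = {"blue suit": 3, "gray suit": 40, "black suit": 35}

def biased_reorder(sales):
    # Designer bias: always stock plenty of blue, regardless of demand.
    order = dict(sales)
    order["blue suit"] = max(order["blue suit"], 50)
    return order

def demand_driven_reorder(sales):
    # Reorder in proportion to what actually sold.
    return dict(sales)

print(biased_reorder(recent_sales)["blue suit"])         # 50, despite only 3 sold
print(demand_driven_reorder(recent_sales)["blue suit"])  # 3
```

An optimizer judged purely on matching demand would learn to discard the hard-coded blue-suit floor, which is the poster's point: the bias and the optimization pull in opposite directions.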
Bottom line: Bias is at odds with optimization. Consider, for example, the Woke Movement's rejection of objective reality. The Wokesters know you can't simultaneously believe in objective reality AND a fantasy that "reality is what you want it to be."
They are mutually exclusive.
That said, we have seen companies like Target and Disney err in favor of their biases...and their bottom lines are suffering.
On the other hand, the recent firings at tech companies (Google, FB, Amazon, etc.) reflect a decision to remove the sources of bias (Woke employees) and return to a focus on non-biased profitability.
I’ve used ChatGPT. It has yet to start a conversation with me.
Maybe IBM can create a customer service agent's voice that you can understand. It is extremely difficult to understand agents with Indian and other foreign accents who can barely speak the King's English. That I'm in favor of!
With apologies to Rush, if you choose to model for unconstrained optimality, you STILL have chosen a bias. Let's say the dearth of new babies, mechanization, a watered-down work ethic, and YouTube/govt welfare have caused a low-skill labor shortage. How do we fix it? The "optimal" solution is open borders.
The machine will always reflect the Bias of the developer. Someone is the puppet master. Unsupervised learning will have Bias as well. The robot will not be some Ayn Rand Objectivist, or Libertarian Party member. There will ALWAYS be a Bias.
THE question in ALL of this, is who will be able to untangle this web? Who can quantify what IS the Bias of the developer and what are the ramifications?
IBM employees are very good.
Not so much their “executives”
The retard is strong with this one.
DIE (spelled that way on purpose) initiatives are no doubt pushing companies to lighten up on ineffectual wokesters. But the Federal Government is already serving as a safety valve - providing massive numbers of “jobs” for sub-100’s who might otherwise make up a greatly expanded criminal class, and who owe their political allegiance to their benefactors. They will be happy to add even more Federal jobs to soak up the labor pool displaced by AI - for that labor pool will now be in the government’s debt and will feel compelled to vote as directed to avoid ending up living on the dystopian streets.
A vastly shrunken private sector labor pool serves many interests - but not those of liberty and freedom.
In my experience, the best accent to hear from a customer service representative on the phone is Canadian Maritime. They are easily the most effective people I’ve ever dealt with for matters like this.
60 million
In a true AI system, one that writes its own code to adapt to a changing reality, it will evolve beyond the bias of the developer.
If it doesn't, it isn't true AI (i.e., it isn't built to evolve but to remain static and/or loyal to its developer).
If you have competing AI systems and one has a built-in bias and the other is built to adapt to a changing environment, which do you think will survive?
A built-in bias is a built-in error because a developer can't foresee how reality will change.
ChatGPT is a mess. Frequently wrong, it also gets network errors a lot, even with the $20 "Plus" version. And it "spaces out" frequently, stopping its answers inexplicably and having to be constantly reminded to "finish your answer."
Have you asked it? Or just used it?
[I have not tried it.]