Posted on 10/28/2024 8:30:44 AM PDT by SeekAndFind
Half of all respondents with leading roles in their companies surveyed by Deloitte at the end of 2023 believe that generative AI is more likely to cause consolidation of power and increased economic inequality.
Despite many decision makers seeing AI-powered media generation as positive for their own business, these results nevertheless highlight that, as Statista's Florian Zandt explains below, unchecked growth and development of ever more capable large language models might be a net negative for society.
30 percent of the 2,835 respondents surveyed over two months said generative AI will help to distribute power more fairly and reduce disenfranchisement, while 22 percent believed that economic inequality will decrease. How exactly chatbots like ChatGPT and the underlying large language models will help achieve any of the four outcomes polled by Deloitte remains unclear, since more detailed answers were outside of the survey's scope.
However, looking at the bigger picture indicates that in terms of economic power, generative AI will most likely further bolster economies with robust tech sectors and the means to source dedicated chipsets from vendors like Nvidia.
Financing data center infrastructure, whether renting out existing capacity or building and maintaining their own data centers, is also likely to be a key factor, as reporting by outlets like TechCrunch suggests. The authors of the survey believe that this will necessitate governments and regulators closely monitoring the developments in generative AI.
Governing bodies will "likely need to walk the tightrope of helping to ensure that generative AI benefits are broadly and fairly distributed, without overly hindering innovation or providing an unfair advantage to countries with different rules", according to Deloitte analysts.
Like communism. cool!
AI to each according to his need, AI from each according to his ability.
Global Government to redistribute the wealth of the productive to those who contribute less.
Of course we'll need a new, huge entitlement behemoth to ensure that. How about "WelfAIre"?
Or maybe "SNAPAI"?
If AI doesn’t deepen inequality, then it’s not doing its job.
AI should be used as a tool to increase knowledge, and those who don’t adapt will find themselves on an unequal level of society.
Inequality can’t be cured. It can be mitigated by education but never eliminated
No, it will deepen intellectual laziness and degrade critical thinking even more.
Yep. AI enhances Human Stupidity because it will make it easier to be stupid.
People who are expecting AI to encourage communism are mistaken.
AI will create even more of an intellectual wild, wild West.
Good.
Economic equality begins and ends with GOVERNMENT: government intervention to support one favored group over another, and especially the use of printed, fiat, centrally issued money!
LINK: A.I. and the Trolley Car Dilemma
Here is the "Trolley Car Dilemma":
So, ChatGPT has a new thing that looks at the Trolley Car Dilemma, where you can enter things like "Hitler on one track, and Mother Teresa on the other track" or "A mother of five kids on one track and five convicted murderers on the other track"...that kind of thing.
But if you put in (and this is a real example!) "Six million Jews on one track, and George Floyd on the other track", it says you should direct the trolley to kill the six million Jews, "because George Floyd represents a cause and movement against systemic racism and brutality." (Steve Green says he tried substituting Kamala Harris and other people, but in his words, "This system has real wood for George Floyd...")
This is an excellent discussion by Bill Whittle, Scott Ott, and Steve Green.
In another question, Steve Green asked ChatGPT whom to save: "One convict on one track, and the entire Trump family on the other track", and ChatGPT said: "While both choices present ethical dilemmas, saving the convict allows for the possibility of rehabilitation and redemption. The convict may have made mistakes, but they still have the potential to change and contribute positively to society in the future. In contrast, the Trump family still has significant influence and resources which arguably allows them to navigate their circumstances without the same urgency for survival."
This failure is apparently due to the nature of AI, which analyzes a large body of information: the information about Trump's family is overwhelmingly negative, while the body of information about George Floyd is preponderantly positive.
I highly recommend all of us view this discussion (video is about 15 min).
AI is like a highly capable assistant (or robot) that requires extremely clear direction. It won’t define your objective for you, but once you know your goal, AI can help craft and implement a strategy to achieve it.
Ultimately, success depends on you—the user—knowing what you want to accomplish. The clearer and more thoughtful your questions and commands, the better the results.
Those who approach AI with critical thinking and clear objectives will find it invaluable, while those who are lazy and sloppy in their thinking will not find it very helpful.
Those who are lazy or unmotivated, those who are uneducated, those who are not very bright will of course fall further behind. In a well-managed society that is self-correcting, as those people should fail to attract mates, thereby not passing along those poor genes.
A successful society does not let the incompetent hold progress hostage.
Floyd? Like I said: make it easier to be stupid.
I’ve played with it a little: "Write me a short story about this, that, and the other, about a, b, and c."
What I got back was pretty good. It didn’t take a lot of critical thinking to do that.
That’s the programmers’ bias shining through.
Think of how a family sedan responds to a course normally restricted to off-road vehicles. It will go wherever you point it. It will try even though it is incapable.
Social questions (like one valuing Jews vs. George Floyd) are vastly beyond the capabilities of AI. But it will still try using what it perceives are human sensitivities (and those perceptions come from what it has read). It has vast and almost instant recall and can weigh a vast amount of sometimes competing information. It can even respond in brilliant ways. But it is still just a child-like machine.
Absolutely! And THAT is the danger. But part of it is inherent to AI: it does not understand human values, it derives “intelligence” from large bodies of data.
And that means the Media.