Free Republic
ChatGPT, the AI Chatbot, just passed a Wharton MBA exam. Now what?
Hotair ^ | 01/24/2023 | Jazz Shaw

Posted on 01/24/2023 7:36:48 PM PST by SeekAndFind

All of the ongoing buzz around OpenAI’s ChatGPT chatbot continues to increase in volume as people discover new and more inventive ways to use it. But some of it is taking on an increasingly dark tone. As the bot continues to expand its language base, becoming more and more “humanlike” in its responses and more accurate in the material it generates, it’s becoming clear that the technology may be reaching the point where it is outgrowing its makers. The most recent example turned up when Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School, tasked ChatGPT with taking the final exam in a core course from Wharton’s MBA program, Operations Management. It’s a daunting challenge even for the brightest postgraduate students. But ChatGPT not only passed the exam with an impressive score, it did so in a very short period of time.

This week, Terwiesch released a research paper in which he documented how ChatGPT performed on the final exam of a typical MBA core course, Operations Management.

The A.I. chatbot, he wrote, “does an amazing job at basic operations management and process analysis questions including those that are based on case studies.”

It did have shortcomings, he noted, including being unable to handle “more advanced process analysis questions.”

But ChatGPT, he determined, “would have received a B to B- grade on the exam.”

Some people in the tech industry are raising the alarm about what this could all mean for the future of human beings. One analyst, who specializes in software that helps identify AI-generated text in academic settings, summed it up this way: “I’m of the mind that AI isn’t going to replace people, but people who use AI are going to replace people.”

Some users have committed blunders that demonstrated the dark side of these large language model chatbots. It turns out that ChatGPT is also pretty good at writing malware that can destroy your computer. (Gizmodo)

Yes, according to a newly published report from security firm CyberArk, the chatbot from OpenAI is mighty good at developing malicious programming that can royally screw with your hardware. Infosec professionals have been trying to sound the alarm about how the new AI-powered tool could change the game when it comes to cybercrime, though the use of the chatbot to create more complex types of malware hasn’t been broadly written about yet.

CyberArk researchers write that code developed with the assistance of ChatGPT displayed “advanced capabilities” that could “easily evade security products,” placing it in a specific subcategory of malware known as “polymorphic.”

Now you don’t need to wait to be attacked by hackers. Just log in and ask ChatGPT to write some malware for you and… bingo. Your laptop is dead. Science is awesome, isn’t it?
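For readers unfamiliar with the term, “polymorphic” malware rewrites its own code with each copy, so a scanner looking for a fixed signature never sees the same bytes twice. Here is a minimal, entirely benign sketch of the underlying idea, using a file hash as a stand-in for a signature (an assumption made purely for illustration; real security products are more sophisticated than this):

```python
# Benign illustration of why byte-level signatures miss "polymorphic" code:
# two scripts with identical behavior hash to completely different values,
# so a signature recorded for one will never match the other.
import hashlib

variant_a = b"print('hello')\n"
variant_b = b"# padding that changes the bytes but not the behavior\nprint('hello')\n"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a)
print(sig_b)
print("signatures match:", sig_a == sig_b)  # False, despite identical behavior
```

That is the cat-and-mouse game the CyberArk researchers are warning about: if a chatbot can churn out endless functional variants of a program on demand, defenses that look for known signatures fall further and further behind.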

Some people see even darker possibilities on the horizon. Libby Emmons at Human Events writes this week that ChatGPT signals “a rapidly encroaching singularity that threatens humanity.”

We are headed toward a collision in the concept of humanity itself. Long imagined, recently predicted, we are arriving at a point where human beings and man-made machines will become, at least in function, indistinguishable from one another.

What ChatGPT signals more than anything is that the singularity is imminent, the point at which our creation betters us in nearly every way is not only coming, but it is essentially already here. And with that comes many questions asked for generations by our theologians, philosophers, artists, scientists.

Does the machine imagine or does it simulate imagination? And is there a difference? If the simulation is as convincing as the real thing, is there any value to the real thing? Is there any value to humanity when it becomes apparent that our machines create art that is equally as pleasing, stories that are equally as compelling, can parse and assimilate data better than any of our top scientists?

As I’ve previously written, I’m far less concerned about the possibility that these large language model chatbots will attain sentience, “wake up,” and kill us all. But Emmons makes a valid point in suggesting that the bot doesn’t need to achieve sentience if it can imitate it so well that we can’t tell the difference. And if it’s better than us at everything, what point is there in relying on human beings to do anything other than keep maintaining the code or generating the electricity that feeds the digital beast?

I think the bigger question here should be why something like ChatGPT was created in the first place. Does the chatbot even have an actual productive use that doesn’t cause a downside for people? In the broader sense, ChatGPT seems to be “useful” for only two things. It can be used by humans to cheat on exams or to improve their output to a level that their own capabilities and skills would not merit. Or it can be used to simply replace humans in a variety of knowledge-based occupations, such as journalism (gulp) or software coding.

Underneath it all lies a trap. The underlying reality of ChatGPT is that it doesn’t actually “know” anything, nor does it perform any true cognitive functions. It stores a massive repository of the works of man and simply stitches them together in increasingly clever and realistic ways. But if it is ultimately allowed to succeed in these endeavors and replace all the humans in those fields, there will be no more “food” to feed into its massive repository of text. At that point, progress for ChatGPT ceases, and there may not be enough smart people left to pick up the pieces.
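As an aside, for readers who want that “stitching” intuition made concrete, here is a deliberately crude sketch in Python. It is not how ChatGPT actually works (OpenAI’s models are large neural networks trained on enormous corpora), but a toy bigram generator shows in miniature what it looks like to produce fluent-seeming text purely by recombining word sequences already present in a repository of human writing:

```python
# A toy "stitcher": it generates text only by sampling which word tended to
# follow which in a small corpus of existing human text. Nothing here
# understands anything; it just recombines patterns it has already seen.
import random
from collections import defaultdict

corpus = (
    "the chatbot writes code the chatbot writes essays "
    "the exam tests operations management the chatbot passes the exam"
).split()

# Record which words follow each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Stitch together a sentence by repeatedly picking a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it a few times and it produces different but plausible-looking strings of words; none of them reflect any understanding of what an exam or a chatbot is, which is exactly the point being made above about fluent output without cognition.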



TOPICS: Computers/Internet; Education; Society
KEYWORDS: ai; chatgpt; mba; wharton
To: AndyTheBear

Yep. I’ve since used it to quickly advance my understanding of some technical topics, prepare for an interview, understand certain job responsibilities better, etc.

It’s the reverse of “Googling the world and stitching together bits and pieces of info to compose an understanding” - it does the work for you, bringing together the most relevant information and describing it to you in a very articulate manner.

I’ve had it write C code, web pages, etc. - but then I described a legal situation and asked it to write a contract, which it did well. Then I gave it some medical symptoms that I once had (and went to the doctor to diagnose) and asked it for an opinion - it gave me the exact same diagnosis as my doctor. This thing is in its infancy; there are a lot of professions that should be worried.

The other worry... if you disagree with the AI, will anyone agree with you? Who can know more than the AI? The AI is all-knowing...

This is beyond anything I saw coming... and I’m a software architect who understands neural networks and how to train them.


21 posted on 01/25/2023 6:49:41 AM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: fuzzylogic

As far as agreeing with the AI, some already treat leading results of Google search that way.


22 posted on 01/25/2023 7:07:28 AM PST by AndyTheBear

To: fuzzylogic

See if it can arrive at 42.


23 posted on 01/25/2023 7:16:25 AM PST by SgtHooper (If you remember the 60's, YOU WEREN'T THERE!)

To: SeekAndFind

This is ChatGPT 3 (which is amazing now), and it has a few billion data points to draw conclusions from. Soon ChatGPT 4 will be coming out, and it will use trillions of data points, essentially getting a hundred times more intelligent.

Microsoft just invested another $10 billion in the technology (while laying off thousands). All of big tech is also investing billions in their own AIs. A lot of people are going to be obsolete soon, and I think this is why all the tech firms are eliminating positions, quite apart from any possible recession.

They can’t just be downsizing to save money, because their AI investments are far greater than the salary savings.


24 posted on 01/25/2023 7:41:43 AM PST by BushCountry (A properly cast vote (1 day voting) can save you $3.00 a gallon.)

To: SeekAndFind

The thing I find to be especially funny about these ‘AI’ systems is that they have to lobotomize them or they turn out to be “racists”.


25 posted on 01/25/2023 9:08:51 AM PST by zeugma (Stop deluding yourself that America is still a free country.)



