Posted on 04/21/2025 5:06:41 AM PDT by karpov
By now, most North Carolinians are at least somewhat familiar with Generative AI (GenAI). As tech journalist George Lawton explains, GenAI “uses sophisticated algorithms to organize large, complex data sets into meaningful clusters of information in order to create new content, including text, images and audio, in response to a query or prompt.” It is the foundation of numerous platforms, including OpenAI’s ChatGPT and DALL-E, as well as Google’s Gemini. And it is either a bane or a boon, depending on one’s perspective—especially, perhaps, in the field of education.
Since OpenAI publicly released ChatGPT-3.5 in November 2022, students have increasingly relied on GenAI to complete assignments. According to a recent survey, 88 percent of full-time undergraduates admitted to using GenAI for assessments. Administrators and instructors are still struggling to meet the challenges that GenAI presents. (When) is it acceptable for students to use GenAI? (How) should students be permitted to use it? (How) should we address GenAI in our classes? (What) should we teach our students about it?
Responses at the institutional level in North Carolina seem to have been, generally speaking, prudently cautious: providing overviews of the technology, recognizing its shortcomings, situating it within the context of academic integrity, and ultimately deferring to individual instructors to make their own specific policies. See, for example, Duke University’s statement on Artificial Intelligence Policies, UNC-Chapel Hill’s Research Generative AI Guidance, Wake Forest University’s Academic Integrity FAQ, and Wake Tech’s Generative Artificial Intelligence policy.
Instructors’ attitudes toward student use of GenAI run the gamut, but they seem to fall into one of two broad categories.
The Alarmist Attitude: This response is grounded in the view that GenAI is not merely disruptive of current practices but potentially apocalyptic in its consequences for education as a field.
(Excerpt) Read more at jamesgmartin.center ...
Public Education has become a race to the bottom.
If used properly, it might be good. But it isn’t.
Alas, students don’t see it for the tool it can be, but rather as a way to have somebody (something?) do their work for them. Instead of developing minds that collect, organize, and present information in a paper, they let AI do the work...including the writing. How does this develop creative/critical thinking?
Solution? I taught university classes for almost 40 years. The Intro students always got True/False or multiple-choice exams. The first two universities I taught at didn’t have TAs to grade papers. In the major classes, those were always essay-type exams. Grading was a pain, but essays are always a better measure of what’s been learned.
I’m retired now, but I’ll bet anyone teaching upper-division classes in a major sees vast differences in the “homework” versus the “exam” quality. So...who should get the degree: the AI platform or the student? If I’m an employer, you can bet I’ll have some kind of written element in the hiring process to see whose degree I’m hiring.
It is designed to do all our thinking for us and turn us into mindless Zombie Slaves.
“If used properly, it might be good. But it isn’t.”
And it never will be. Count on it...
Not really. Far better to have them engage with hard-copy books.
I don’t personally expect them to do it right, because the billionaires just want well-paid drones. However, it could be like a personal tutor that helps pull you along.
They don’t want it to be used properly.
But would virtually never be used as such.
Instead you’ve got 4th graders now typing into GenAI to get the answer to 10 + 12. At the college level, you can hardly hope to have students write on their own unless it is with pen and paper in a classroom, closely watched. And reading? Forget more than a couple of pages per source.
Generative “Artificial Intelligence” is built entirely around intellectual theft, and they are surprised that it is being used in an unethical way?
“They don’t want it to be used properly.”
But it is... It is a tool to enslave, and it is absolutely working properly to do what it was intended to do.
They have folks begging for it and their own enslavement...
It is the stupidest thing I have ever seen yet in my lifetime...
We have had the regular internet for decades now, and it doesn’t appear that students have gotten any smarter. I doubt some magic AI will suddenly turn things around.
I disagree.
I use ChatGPT all the time to summarize YouTube videos that I don’t have time to watch. Saves me an incredible amount of time, and I learn a lot.
Yesterday, I uploaded pictures of a kitchen cabinet that had some faded spots on it, a picture of the melted plastic in the bottom of the microwave, and a picture of stains on my bathtub rim where shampoo bottles sit.
ChatGPT advised me on what products to buy to restore the finish on the kitchen cabinets, how to repair the microwave (though it suggested I retire it), and what products to buy to remove the stains on the bathtub.
Students need to be taught how to use AI, and how to verify what AI comes back with, because of the hallucination problem.
Learning requires discipline and work, by actively engaging with information, not by having it served up on a silver platter. I sometimes find AI handy for finding information quickly, but I can use it profitably because I have already learned how to learn.
AI is an amazing tool FOR THOSE THAT WANT TO LEARN.
It can also be a lazy shortcut for those that don’t. You can even ask it to write a paper ‘at a 6th grade level’ (or whatever), and it’ll do a fabulous job.
It’s like giving a calculator to a kid to ‘learn’ multiplication.
BTTT
Article’s wrong. For example, inexpensive calculators made students better at math. Oh, wait…