Free Republic
Browse · Search
General/Chat
Topics · Post Article


NYU Professors Tell Their Students: Do Not Use The AI Tool, ChatGPT; It is Considered Plagiarism
Vice ^ | 01/26/2023 | Chloe Xiang

Posted on 01/26/2023 9:10:17 PM PST by SeekAndFind


[Screenshot of NYU syllabuses]

School's back in session and the hottest topic is ChatGPT. New York University professors are prohibiting the use of the AI tool in the “academic integrity” sections of their syllabuses, and many students were given an explicit warning from professors on the first day of class not to use the bot to cheat on assignments.

The popular chatbot created by OpenAI, which can generate everything from academic essays to news articles, has put many professors and teachers on alert for essays ghostwritten by a bot.

Jenni Quilter, the Executive Director of the Expository Writing Program and the Assistant Vice Dean of General Education in the College of Arts and Sciences at NYU, told Motherboard that professors are worried about their students using ChatGPT to cheat. Quilter said that both individual school departments and the central university have already provided guidelines to professors on how to handle a situation in which ChatGPT is used without permission.

“The situation has already come up—we had instances of students using ChatGPT in December,” Quilter said. “The repercussions for using ChatGPT without acknowledgment are the same as they would be for any case of academic plagiarism, and range from redoing the assignment to grade deductions and a report lodged with the Dean of that student's college.”

David Levene, who is a professor of Classics and the Chair of the Department of Classics at NYU, told Motherboard that he is keeping a close watch for any ChatGPT-related plagiarism.

“I've included an alert that it is banned unless used with my express permission as part of an assignment, and any use of it counts as plagiarism,” Levene said. “I also told [my students] (which is true) that I ran various essay-prompts through ChatGPT, and the essays it came up with were at best B- standard, and at worst a clear F. So (I told them) if they are hoping to get better than B- for the course, they should avoid it like the plague!”

In a class at NYU’s Tisch School of the Arts, the professor plainly wrote on the syllabus, “Q: Is using ChatGPT or other AI tools that generate text or content considered plagiarism? A: Yes.”

ChatGPT warnings have not been limited to essay-based classes, either. One macroeconomics syllabus that Motherboard saw said, “The time constraint is purposely tight so you will not have enough time to consult your books, ChatGPT, or other sources, and still complete all the questions on the Quiz. …Students may not communicate with anyone (including ChatGPT) during the 24 hours a Quiz is available.” Using ChatGPT to solve math problems may actually backfire, as the app has already been shown to fail at even 6th-grade-level math.

The NYU professors’ concerns are not completely unfounded. According to a poll conducted by The Stanford Daily, 17 percent of Stanford students used ChatGPT to assist with their fall quarter assignments and exams. 

Since the release of the most recent version of ChatGPT in December, school districts and universities across the country have started to transform academic policies and teaching formats to prevent their students from cheating with the tool.

New York City’s education department was one of the first districts to ban student access to ChatGPT on school networks and devices in early January. The New York Times reported that professors are making changes such as requiring handwritten assignments rather than typed ones, and others are trying to incorporate ChatGPT into lessons, such as by evaluating its responses. 

OpenAI CEO Sam Altman addressed concerns about cheating and plagiarism in an interview with StrictlyVC, saying that teachers should modify their classrooms around new technology. “We're going to try and do some things in the short term. There may be ways we can help teachers be a little more likely to detect output of a GPT-like system. But honestly, a determined person will get around them," he said. “Generative text is something we all need to adapt to.” 

People are already developing methods to quickly spot whether something is AI-generated. For example, a computer science student at Princeton built GPTZero, an app that attempts to detect whether a body of text was human-written or AI-written.

Turnitin, a plagiarism detection service through which students can submit writing assignments, announced that starting in 2023, it would begin incorporating a new tool that can detect AI-assisted and ChatGPT-generated writing.

“It is important to recognize that the presence of AI writing capabilities does not signal the end of original thought or expression if educators set the right parameters and expectations for its use,” the company wrote in a press release. “We encourage you to have these discussions at your institution now and set achievable standards and expectations for your students around the acceptable use of AI-assisted writing tools.”


TOPICS: Computers/Internet; Education; Society
KEYWORDS: ai; chatgpt; nyu; plagiarism
To: SeekAndFind

Does this mean that all of the writers at Hallmark are no longer needed?


21 posted on 01/27/2023 1:28:40 AM PST by Laslo Fripp (Semper Fidelis)

To: All

I saw another news article saying ChatGPT passed an MBA exam given by a Wharton professor. And Professor Jonathan Choi, of the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions.

In a white paper titled ChatGPT Goes To Law School, published on Monday, he and his co-authors reported that the bot scored a C+ overall.

Maybe I should use ChatGPT to post to FreeRepublic for me /s


22 posted on 01/27/2023 1:38:06 AM PST by VAFreedom (Wuhan Pneumonia-Made by CCP, Copyright Xi Jingping)

To: Wayne07

I understand your point.
I am not a data scientist, but if this technology can be created, similar technology can distinguish human-created content from machine-generated content. Perhaps not technically ‘steganography’ but something similar.
A previous poster mentioned using this tool to paraphrase content, then re-write it using actual research and composition. That makes sense, but would, I believe, defeat the reason that one attended college in the first place.


23 posted on 01/27/2023 1:56:28 AM PST by sonova (That's what I always say sometimes.)

To: Wayne07

OpenAI guest researcher Scott Aaronson said at a December lecture that the company was working on creating watermarks for the outputs so that people could see signs of a machine-generated text.

++++++++++++++++++

It was in the original article. I should have read further.


24 posted on 01/27/2023 2:07:26 AM PST by sonova (That's what I always say sometimes.)

To: Nifster

Did you really?

It didn’t create the response until programmed to do so by your intervention.

[I would also ask: if 2 students in the same class used it, would the answers or papers written be the same?]


25 posted on 01/27/2023 2:35:19 AM PST by Adder (ALL Democrats are the enemy. NO QUARTER!!)

To: SeekAndFind

The problem is our younger generations haven’t just collapsed academically: They’re squat ethically, too.


26 posted on 01/27/2023 2:39:31 AM PST by 9YearLurker

To: sauropod

Bkmk


27 posted on 01/27/2023 4:43:06 AM PST by sauropod (“If they don’t believe our lies, well, that’s just conspiracy theorist stuff, there.”)

To: SeekAndFind

and yet, we have The Plagiarist In Chief residing in The White House.

AND...

we have a National Holiday honoring a Whitewashed Character/Philanderer/Communist-Sympathizer who plagiarized 60% of what he wrote and said ~ Michael King


28 posted on 01/27/2023 5:22:55 AM PST by nevermorelenore ( If My people will pray ....)

To: Harmless Teddy Bear

Ummm - the AI goes out and gets info from what’s already written - I’d guess that since the internet got so full of data, 95% of all papers are cribbed from others and just reworded a bit to not be exact copies.


29 posted on 01/27/2023 5:36:46 AM PST by trebb (So many fools - so little time...)

To: Adder

You didn’t write it. You turned it in with your name on it


30 posted on 01/27/2023 7:05:57 AM PST by Nifster (I see puppy dogs in the clouds )

To: sonova

I looked up his blog, and he wrote an interesting, but long, post on how they could watermark text. The gist of it is that word-sequence choices could be biased in a way that would, probabilistically, only come from ChatGPT.

This is his explanation:

How does it work? For GPT, every input and output is a string of tokens, which could be words but also punctuation marks, parts of words, or more—there are about 100,000 tokens in total. At its core, GPT is constantly generating a probability distribution over the next token to generate, conditional on the string of previous tokens. After the neural net generates the distribution, the OpenAI server then actually samples a token according to that distribution—or some modified version of the distribution, depending on a parameter called “temperature.” As long as the temperature is nonzero, though, there will usually be some randomness in the choice of the next token: you could run over and over with the same prompt, and get a different completion (i.e., string of output tokens) each time.

So then to watermark, instead of selecting the next token randomly, the idea will be to select it pseudorandomly, using a cryptographic pseudorandom function, whose key is known only to OpenAI. That won’t make any detectable difference to the end user, assuming the end user can’t distinguish the pseudorandom numbers from truly random ones. But now you can choose a pseudorandom function that secretly biases a certain score—a sum over a certain function g evaluated at each n-gram (sequence of n consecutive tokens), for some small n—which score you can also compute if you know the key for this pseudorandom function.
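The scheme Aaronson describes can be sketched in a few lines of Python. Everything here is an illustrative assumption, not OpenAI's actual implementation: the secret key, the use of HMAC-SHA256 as the keyed pseudorandom function g, and the candidate token sets are all made up for the example. The idea is just that the generator nudges token choices toward high values of g, and anyone holding the key can later check whether a text's average g score is suspiciously high.

```python
import hmac
import hashlib

SECRET_KEY = b"provider-private-key"  # hypothetical; only the provider would know it

def g(ngram, key=SECRET_KEY):
    """Keyed pseudorandom score in [0, 1) for an n-gram of tokens.

    HMAC-SHA256 stands in for the cryptographic pseudorandom
    function in Aaronson's description.
    """
    data = " ".join(ngram).encode()
    digest = hmac.new(key, data, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(context, candidates, n=3):
    """Among roughly-equally-plausible candidate tokens, pick the one
    that maximizes g on the resulting n-gram — the secret bias that
    embeds the watermark."""
    return max(candidates, key=lambda tok: g(tuple(context[-(n - 1):] + [tok])))

def watermark_score(tokens, n=3):
    """Average g over all n-grams in a text. Watermarked text should
    score noticeably above the ~0.5 expected of ordinary text."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return sum(g(ng) for ng in ngrams) / len(ngrams)
```

For example, `pick_token(["the", "cat"], ["sat", "ran", "slept"])` deterministically returns whichever candidate gives the final trigram the highest keyed score; since g is deterministic given the key, the detector can recompute the same scores, while anyone without the key sees output indistinguishable from random sampling.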


31 posted on 01/27/2023 8:30:59 AM PST by Wayne07
[ Post Reply | Private Reply | To 24 | View Replies]

To: Wayne07

Yeah, that.


32 posted on 01/28/2023 7:50:10 AM PST by sonova (That's what I always say sometimes.)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson