By: Jonathan Levin  | 

How Well Can YU Faculty Detect ChatGPT?

At the end of November 2022, towards the end of last academic year’s fall semester, the first large language model chatbot, ChatGPT, was released, causing a shockwave throughout academia. For the first time, students had access to a program that could seemingly write coherent essays with just a short prompt.

At first, it was a novelty. Students and professors at YU tested ChatGPT for prompts, and across the country, people were generally astounded by the generative abilities it showed. Yet, at the same time, the technology’s release brought about a lot of fears: What did this mean for the future of academia?

It wasn’t long before generative AI began to appear in student submissions across the country, including at YU, where some students were caught using AI-written papers for a take-home final, leading to a change in YU’s undergraduate academic integrity policy. Soon, tools to detect AI were introduced, and have since been incorporated into Turnitin, a plagiarism detection software incorporated into assignment upload portals on Canvas, YU’s course interface.

Nearly three full semesters have passed since then, and AI’s novelty status has worn off. Doomsday predictions about what technology would do for education have passed and large language models’ existence has been accepted as permanent. With AI no longer a novelty, what does the AI usage situation look like at YU?

The Commentator spoke with several faculty members from across Stern College for Women (SCW), the Sy Syms School of Business (SSSB) and Yeshiva College (YC) about the issue. Those who spoke with The Commentator about AI agreed that AI usage by students is a problem, but all had different experiences. For one professor from YC’s Bible department, who grades primarily off of handwritten exams, the advent of large language models didn’t affect his classes much. But for others, it has, and some told The Commentator that AI was used for all kinds of assignments, even more inconsequential ones like discussion boards.

“Usually there are 2 or 3 students in a class who use ChatGPT on any given assignment, even on small in-class writing assignments that ask for what students think about something,” one professor in YC’s English department told The Commentator. “Most YC students don’t use it and wouldn’t use it, but the few who do seem to think everyone is doing it all the time. They are not.”

There was variation in the number of students reported using AI, with most saying they saw limited usage, although another English professor at YC, said that up to 70% of submitted assignments in his class showed signs of being written or edited by AI. The Commentator didn’t speak with a large enough sample of faculty to gain a full picture of the situation.

“We would need a real statistical study to answer these questions with any authority,” said one English professor at SCW, who said AI use at the school, though not “pervasive,” was still an issue.

Nearly all faculty, especially the English writing faculty who have seen thousands of essays over the course of their career, told The Commentator they can detect AI written content with ease, with one calling it “instantaneously recognizable.”

“It's usually quite easy for an experienced teacher to intuit when AI is being used,” said one English professor at SCW. 

“In literature,” said another professor, “we are usually asking students to analyze what they have read, and to find quotations that provide evidence for their claims. ChatGPT will just make up text that it claims is in the novel or poem. It’s extremely easy to tell that ChatGPT has written something when it invents passages of books that don’t exist.”

According to faculty, ChatGPT writing is easy to detect as the writing is of low quality, formulaic, lacks depth, has perfect spelling and consequently, is easily detectable. While most faculty told The Commentator that they can detect AI written content easily, one said it can be difficult to distinguish from poor writing.

“I cannot prove something was written by ChatGPT if it is mediocre,” said another professor at SSSB. “Excellent writing is not currently written by ChatGPT. Mediocre or poor writing appears indistinguishable from ChatGPT.”

“It’s more difficult to tell what happened when a student turns in papers in ChatGPT’s voice — that impersonal, vague, boring, inhuman voice that has no perspective and makes no rhetorical claims,” said one professor. “I get concerned when a student has been talking animatedly about a project in class, and then they turn in something that sounds flat and vague. Sometimes it’s Hemingway or Grammarly [AI editing software]; they make humans sound inhuman.”

According to an internal SCW English department memo shared with The Commentator, there are other qualities of AI writing that make it easy to detect. ChatGPT often cannot write long essays, is trained on data that is not up to date on current events, does a poor job with academic style citations, gives “vague examples” when connecting things to personal experiences, makes errors regarding facts and often gives similar answers to the same prompts. Writing generated from the same prompts is often similar, which can also result in Turnitin finding similarities between different submissions from students using AI, even if the writing would test negative for plagiarism if tested in isolation.

According to the memo, another way of ascertaining if something was written by AI is for the professor to speak with students about their assignment and see if they understand the topic they wrote about. One professor told The Commentator about such an experience.

“A student passed off a memo written by ChatGPT as his or her own during a meeting with me,” said one professor. “After 5 minutes of discussion, including my requests to restate the memo in language that made sense and the student insisting that this is how he or she talked, the student admitted that the memo was completely an AI product.”

Another way of detecting AI is by putting secret messages into assignment prompts that if submitted into a large language model, will output responses with certain keywords faculty can use as indicators for AI use. Faculty members continually analyze ChatGPT and other large language models, learning more and more about how to detect it and how it operates, with sessions being held on Zoom for faculty members interested.

Students caught using ChatGPT can face discipline and have been brought before the Academic Integrity Committee (AIC), which has imposed penalties. Most faculty who spoke to The Commentator said they referred issues to the committee and were happy with that. 

“The Academic Integrity Committee is very serious and does its job well,” said one faculty member, “taking the matter out of the instructor’s hands and making decisions that are fair and account for violations the instructor may not know about. I feel confident that I can refer someone to the AIC and they may be found in violation or not in violation, and that the right thing will be done. If that weren’t true, I’d have to do hours and hours of sleuthing myself, which I don’t have the time or training to do.”

While most faculty who spoke with The Commentator felt this way, the professor who told The Commentator they’ve seen 70% of assignments with AI content said the sheer number of papers with AI content made it impossible for him to fully handle and expressed the belief that the university wasn’t supporting faculty with the issue. 

As for students, some have expressed concerns about how the AIC operates. An article published by the YU Observer raised issues with students being flagged and summoned to the committee for allegedly using AI despite saying they never did, with many expressing concerns about the AIC’s level of professionalism. Students expressed concerns about a lack of communication on the AIC’s part, AIC members showing up 45 minutes late to meetings with students about their alleged cheating and interrogations where the burden of proof was put on the students, not the committee accusing them.

The advent of AI has caused a change in how some professors issue assignments, with faculty showing less willingness to assign take-home assignments, something encouraged by the administration.

“Personally, I no longer consider giving "take home" assignments, papers etc., because the odds are significant that the submission was generated by ChatGPT or some other AI or it was mindlessly copied and pasted from the internet,” said one professor in the Sy Syms School of Business (SSSB). “I have had numerous students say to me that they routinely use ChatGPT / AI to generate regular emails. Why would I then trust them not to use ChatGPT for a term paper?”

Faculty told The Commentator that they are worried about how AI affects students, saying using it affects students' abilities to learn. “You cannot develop the critical thinking skills required by many of the jobs sought by our graduates if you have been substituting the verbiage of Chatbots for actual thinking,” said one professor.

One major difference between AI and other forms of cheating, which still exist, is that students don’t need to understand the assignment as well to plagiarize. As for how prevalent cheating is at YU overall, the professors who spoke with The Commentator gave vastly different answers. One professor who teaches at SSSB said it was “immeasurably worse” and expressed concerns about what he felt was the school’s decades-long culture of cheating, while another, from the same school, thought it was the same as other universities. Some from other schools felt it was the same as other universities as well, and one said they thought it was less pervasive at YU. The Commentator did not interview enough faculty members to get a good picture of professors' attitudes toward the pervasiveness of cheating.

“From what I have heard, it is better at YU,” said an English professor at SCW. “On the other hand, I suspect that most professors feel their schools are better than others. So, there may be some confirmation bias involved. But my perception is that the problem is worse at other schools. YU students tend to have strong language skills relative to the overall demographic, which I am sure helps here.”

“Is there more cheating at YU than at other schools? I don't believe so,” said a professor at SSSB. “However, so long as there are those who believe that rules (be they school rules, traffic rules, other rules of law etc.) are for other people and that ‘cheating’ is fine so long as they don't get caught, ethical principles remain an abstract intellectual idea to be memorized for an exam and then forgotten, and the two worlds of Torah and Madda remain two separate, distinct elements rather than one synthesized and unified whole.”

____

Photo Credit: Yeshiva University

Photo Caption: ChatGPT is easily detectable, faculty told The Commentator.