Have you ever noticed your child using AI to research a school assignment? Have you ever searched Google for something, only to find the “best answer” in the AI overview at the top of the results page? Is this form of research legitimate and factually accurate? When content from AI is used in schoolwork — from simple homework assignments to grade-determining final exams — where is the line between efficient research and academic misconduct?
Secondary schools and higher education institutions are now grappling with how to approach the Artificial Intelligence (AI) tools in prolific use among students. On one hand, these tools give students abundant information at lightning speed, going well beyond a simple Google search: AI can produce a vast algorithmic output of articles and information on whatever topic a student is researching or trying to understand. On the other hand, AI within programs like ChatGPT can generate an answer to a very specific question that requires no further research and arrives already written in the form of an essay or short answer. In this regard, AI can supply the student with a correct answer to an assignment or an exam, but it leaves open the question of whether the student did their own work, or even understands the material being submitted as their own work product.
Schools are struggling to craft AI-use policies that are clear enough for students to understand and that help them avoid the pitfalls of using AI as a research aid. The parents of a student at Hingham High School in Massachusetts sued the school district over disciplinary action taken against their son for using AI on a history project, claiming the punishment was too severe and the policy too vague on the use of such tools. U.S. Magistrate Judge Paul G. Levenson sided with the school, finding that Hingham Public Schools had “the better of the argument on both the facts and the law” and that the student’s “indiscriminately copied and pasted text that had been generated by Grammarly.com” fell fairly within the school’s prohibition on plagiarism. At this time, a final ruling is still pending.
At Yale University, a student is suing the school after being accused of using AI on an exam, which Yale says was detected by GPTZero, an AI detection software. The student, a French native residing in Texas, claims he was discriminated against as a non-native English speaker, and that his professors tried to coerce him into a confession without providing due process. This case raises the question of whether AI detection software is biased and whether it can be relied on for accuracy. According to an article on govtech.com, “In the lawsuit, the plaintiff noted that one Yale department notes that no artificial intelligence tool can detect artificial intelligence use with certainty.” (Brian Zahn, New Haven Register, Conn., March 10, 2025)
Other pitfalls of AI can be gleaned from an issue that arose in the jury trial of MyPillow CEO Mike Lindell, whose attorneys were nearly held in contempt for filing a brief generated at least in part by AI, which U.S. District Judge Nina Y. Wang determined was “rife with errors.” The brief included nearly 30 citations that were erroneous in some way. “These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist,” read Wang’s court order.
The most obvious lesson at this point is user beware. Until clearer AI-use policies are established at both the district and individual school level, students are best served by asking teachers and professors directly which research tools may be used for any work submitted at school. There is also a need for broader discussion, and teaching, of how to use AI both ethically and responsibly.
Lindsay N. Brown has defended students at the high school, undergraduate, and graduate levels against accusations of academic dishonesty involving AI-generated work. If you or your student is facing an allegation of academic misconduct, please contact Ms. Brown for a free initial consultation to see how she can help.
