Has your student ever used AI to research a school assignment? Have you yourself ever searched Google for something, only to find the best "answer" in the AI overview at the top of the results? Is this form of research legitimate and factually accurate?

High schools and higher education institutions are now grappling with how to approach students' use of Artificial Intelligence tools. These tools provide students with readily available information that is a step beyond a simple Google search, producing an algorithmic output of articles and information on whatever topic they are researching. AI within programs like ChatGPT can generate an answer to a very specific question, written in the form of an essay or short answer, that requires no further research. In this regard, AI can supply a student with a correct answer to an assignment or exam, but it leaves open the question of whether the student did their own work or committed plagiarism.

Schools are struggling to make their AI policies clear enough for students to understand and to avoid the pitfalls of using AI as a research aid. The parents of a student at Hingham High School in Massachusetts sued the school district over disciplinary actions taken against their son for using AI on a history project, claiming the punishment was too severe and the policy too vague. U.S. Magistrate Judge Paul G. Levenson found in favor of the school, writing that Hingham Public Schools had "the better of the argument on both the facts and the law" and that the student had "indiscriminately copied and pasted text that had been generated by Grammarly.com." At this time, the final ruling is still pending.

At Yale University, a student is suing the school after being accused of using AI technology on an exam, which Yale says was detected by GPTZero, an AI-detection software. The student, a French native residing in Texas, claims he was discriminated against for being a non-native English speaker and that his professors tried to coerce him into a confession without providing due process. For this case, the question becomes whether AI-detection software is biased and whether its accuracy can be relied upon. According to an article on govtech.com, "In the lawsuit, the plaintiff noted that one Yale department notes that no artificial intelligence tool can detect artificial intelligence use with certainty." (Brian Zahn, New Haven Register, Conn., March 10, 2025)

The most obvious lesson at this point is user beware. Until clearer policies on the use of artificial intelligence are established at both the district and individual school level, students are best served by asking teachers and professors directly which research tools may be used for assignments. There is also a need for broader discussion, and teaching, of how to use AI ethically and responsibly.

Categories: Education Law

Author

Lindsay Brown
