A New York court just ruled that Adelphi University's AI plagiarism accusation against a student was "without valid basis and devoid of reason." It's being called a groundbreaking case — and it's not the only one.

In January 2026, a New York state Supreme Court judge ruled that Adelphi University's AI plagiarism accusation against freshman Orion Newby was "without valid basis and devoid of reason" — and ordered the school to expunge his record. The case is being called "groundbreaking," and it's far from the only one. A pattern of lawsuits is emerging across higher education, each exposing the same systemic failures: unreliable detection tools, unclear policies, and inadequate due process.

Here's what happened, why it matters, and what schools must do now.

The Newby Case

Orion Newby, a student with documented learning and neurological differences, submitted a history paper he wrote with help from tutors in Adelphi's Bridges program — a university-provided support system for students with disabilities. His professor ran the paper through Turnitin's AI detection tool, which flagged it as AI-generated. Newby received a zero, an academic integrity violation, and the threat that a second offense could mean expulsion.

Despite Newby's explanation that he spent 15–20 hours on the paper with tutoring support, the university upheld the finding and denied his appeal. His family spent over $100,000 in legal fees to clear his name. In January 2026, state Supreme Court Judge Randy Sue Marber sided fully with Newby, finding the accusation baseless and the process fundamentally unfair.

His attorney, former U.S. attorney Mark Lesko, described the ruling as "groundbreaking" — particularly for establishing that students deserve due process in AI-related academic disputes.

A Pattern, Not an Isolated Case

The Newby ruling didn't emerge in a vacuum. It's part of a rapidly growing wave of lawsuits challenging how schools handle AI accusations:

Lawsuits challenging how schools handle AI accusations are accelerating — with students with disabilities and non-native English speakers disproportionately targeted.

By the numbers: 5 major lawsuits filed since 2024 · 3 cases involving bias allegations · 0 schools with a clear AI policy at the time of the accusation.
2024 · Hingham High School, Massachusetts · No AI Policy
A student was disciplined for using AI to research a history project — receiving detention, a failing grade, and an initial bar from the National Honor Society. The school had no explicit AI policy.

2025 · Yale School of Management, New Haven, Connecticut · Language Bias Alleged
An executive MBA student sued after being suspended for a year and given a failing grade. His answers were flagged for being "unusually long and elaborate" with "near perfect punctuation" — the student is a non-native English speaker.

2025 · University of Minnesota, Minneapolis, Minnesota · Language Bias Alleged
A Ph.D. student was expelled after faculty accused him of using AI on a preliminary exam. He is a non-native English speaker and is seeking over $1.3 million in damages.

2026 · Adelphi University, Garden City, New York · Student Won
Orion Newby sued after being accused of AI plagiarism and won a landmark ruling. The court found the university relied on unreliable detection tools and failed to provide due process.

2026 · University of Michigan, Ann Arbor, Michigan · Disability Bias Alleged
A student with documented anxiety and OCD sued after being accused of AI use three times. The lawsuit alleges her disability-related writing traits — formal tone, meticulous structure — were treated as evidence of AI use.

The common thread: institutions relying on unreliable detection tools, applying unclear or nonexistent policies, and failing to give students a fair process.

Sources: CBS News, Education Week, Yale Daily News, Poets&Quants, Inside Higher Ed, EdScoop · Graphic by Rumi Technologies

The Detection Problem

At the center of many of these disputes are AI detection tools that research consistently shows are unreliable. OpenAI shut down its own detector after it correctly identified only 26% of AI-written text while falsely flagging 9% of human-written work. Stanford researchers found that detectors misclassified over 61% of essays by non-native English speakers as AI-generated. And Times Higher Education showed that simple prompt engineering could reduce Turnitin's detection rate from 100% to zero.

Major institutions have taken notice. UCLA and several UC campuses declined to adopt Turnitin's AI detection feature. The University of Minnesota does not support or recommend any AI detection tool. The MLA-CCCC Joint Task Force on Writing and AI has urged educators to move away from punitive detection approaches entirely.

What Schools Should Do Now

These cases send a clear message: the current approach is legally, ethically, and educationally unsustainable. Schools should:

Stop treating AI detection scores as proof. No detection tool is reliable enough to serve as the sole basis for an integrity finding. These scores should be one input among many — never the final word.

Establish clear, flexible AI policies. Students and faculty need specific expectations at the assignment level — whether AI use is prohibited, partially allowed, or fully permitted — communicated before work begins.

Ensure meaningful due process. Students must have a real opportunity to be heard, especially when they can provide evidence (tutoring records, drafts, process documentation) that contradicts a detection tool's output.

Shift from detection to process visibility. Rather than trying to catch AI use after the fact, invest in tools that reveal how work is actually produced — drafts, revisions, and individual contributions.

Protect vulnerable students. Students with disabilities, non-native English speakers, and those receiving tutoring support are at heightened risk of false accusations. Policies and adjudication processes must account for this.

Our Perspective

At Rumi, we've always believed that the future of academic integrity isn't about catching students — it's about understanding how they learn. Our platform captures the complete writing process, giving instructors visibility into each student's actual work without relying on the black box of AI detection. We also give institutions the flexibility to set AI policies at the assignment level, ensuring clarity for students and consistency across departments.

The Newby ruling — and the lawsuits now following it — reinforce what we've been saying: the answer isn't better detection. It's better visibility.

Learn how Rumi supports AI Literacy and Academic Integrity