In January 2026, a New York state Supreme Court judge ruled that Adelphi University's AI plagiarism accusation against freshman Orion Newby was "without valid basis and devoid of reason" — and ordered the school to expunge his record. The case is being called "groundbreaking," and it's far from the only one. A pattern of lawsuits is emerging across higher education, each exposing the same systemic failures: unreliable detection tools, unclear policies, and inadequate due process.
Here's what happened, why it matters, and what schools must do now.
The Newby Case
Orion Newby, a student with documented learning and neurological differences, submitted a history paper he wrote with help from tutors in Adelphi's Bridges program — a university-provided support system for students with disabilities. His professor ran the paper through Turnitin's AI detection tool, which flagged it as AI-generated. Newby received a zero, an academic integrity violation, and the threat that a second offense could mean expulsion.
Despite Newby's explanation that he spent 15–20 hours on the paper with tutoring support, the university upheld the finding and denied his appeal. His family spent over $100,000 in legal fees to clear his name. In January 2026, state Supreme Court Judge Randy Sue Marber sided fully with Newby, finding the accusation baseless and the process fundamentally unfair.
His attorney, former U.S. attorney Mark Lesko, described the ruling as "groundbreaking" — particularly for establishing that students deserve due process in AI-related academic disputes.
A Pattern, Not an Isolated Case
The Newby ruling didn't emerge in a vacuum. It's part of a rapidly growing wave of lawsuits challenging how schools handle AI accusations.
The Detection Problem
At the center of many of these disputes are AI detection tools that research consistently shows to be unreliable. OpenAI shut down its own detector after it correctly identified only 26% of AI-written text while falsely flagging 9% of human-written work as AI-generated. Stanford researchers found that detectors misclassified over 61% of essays by non-native English speakers as AI-generated. And Times Higher Education showed that simple prompt engineering could reduce Turnitin's detection rate from 100% to zero.
Major institutions have taken notice. UCLA and several UC campuses declined to adopt Turnitin's AI detection feature. The University of Minnesota does not support or recommend any AI detection tool. The MLA-CCCC Joint Task Force on Writing and AI has urged educators to move away from punitive detection approaches entirely.
What Schools Should Do Now
These cases send a clear message: the current approach to AI accusations is legally, ethically, and educationally unsustainable. Schools should:
Stop treating AI detection scores as proof. No detection tool is reliable enough to serve as the sole basis for an integrity finding. These scores should be one input among many — never the final word.
Establish clear, flexible AI policies. Students and faculty need specific expectations at the assignment level — whether AI use is prohibited, partially allowed, or fully permitted — communicated before work begins.
Ensure meaningful due process. Students must have a real opportunity to be heard, especially when they can provide evidence (tutoring records, drafts, process documentation) that contradicts a detection tool's output.
Shift from detection to process visibility. Rather than trying to catch AI use after the fact, invest in tools that reveal how work is actually produced — drafts, revisions, and individual contributions.
Protect vulnerable students. Students with disabilities, non-native English speakers, and those receiving tutoring support are at heightened risk of false accusations. Policies and adjudication processes must account for this.
Our Perspective
At Rumi, we've always believed that the future of academic integrity isn't about catching students — it's about understanding how they learn. Our platform captures the complete writing process, giving instructors visibility into each student's actual work without relying on the black box of AI detection. We also give institutions the flexibility to set AI policies at the assignment level, ensuring clarity for students and consistency across departments.
The Newby ruling — and the lawsuits now following it — reinforce what we've been saying: the answer isn't better detection. It's better visibility.
