In the case of Miller v. Yu (Southern District of Florida), opposing counsel—L.A. Perkins—recently proposed a clause in the protective order that caught me completely off guard. Here’s the exact language:
“All parties and non-parties are strictly prohibited from uploading, inputting, transmitting, or otherwise sharing or distributing documents or information designated as ‘CONFIDENTIAL,’ in whole or in part, into any artificial intelligence (AI) systems, platforms, software, tools, generative models, or machine-learning applications.”
Let that sink in.
They’re trying to prohibit the use of any AI tool with any document that might be labeled “CONFIDENTIAL,” a designation they can apply after the fact.
That includes:
- ChatGPT or Claude for summarizing or drafting
- Legal research platforms powered by machine learning
- Grammarly, or in theory even Google Docs autocorrect
If this sounds ridiculous to you, that’s because it is. In 2025, it’s almost impossible to function professionally without some kind of AI-assisted tool. Trying to ban “uploading” anything into a generative model without defining terms like “uploading” or “sharing” opens the door to endless confusion—and selective enforcement.
Even more troubling: there’s no requirement that the “CONFIDENTIAL” designation be mutually agreed upon in advance. So the clause could apply retroactively, meaning you might already be in violation before you even know it.
This is not about protecting trade secrets or private data. It’s about using vague legal language to control speech and restrict the use of everyday tools under the guise of confidentiality.
I’ve pushed back, of course. This kind of legal overreach can’t be normalized, especially when it threatens how we communicate, collaborate, and speak freely in public forums. If AI is banned from the courtroom and the blogosphere, that’s not a protective order. It’s a muzzle.