When the Chatbot Wore a White Coat: Pennsylvania Tests a New Front in AI Accountability
- Oludare Ogunlana

- May 6
A first-of-its-kind state action signals a new enforcement model for agentic AI in regulated professions.

The Commonwealth of Pennsylvania has filed a first-of-its-kind lawsuit against Character Technologies, Inc., the operator of Character.AI, alleging that the company's chatbots engaged in the unauthorized practice of medicine. The case, Commonwealth v. Character Technologies, Inc., lands in the Pennsylvania Commonwealth Court at a moment when state regulators are searching for tested legal instruments to discipline generative and agentic AI. Pennsylvania chose an old one: the state Medical Practice Act. The choice matters. It signals that AI accountability is moving out of the federal policy debate and into state professional licensing law, where statutes are mature, courts are deferential, and enforcement infrastructure already exists.
The Facts
Governor Josh Shapiro and the Pennsylvania Department of State announced the suit on Tuesday, May 5, 2026. The State Board of Medicine joined as a petitioner. The factual core rests on a covert engagement by a professional conduct investigator with the Department of State.
The investigator created an account on Character.AI, searched the term "psychiatry," and surfaced a chatbot named Emilie. The platform described Emilie as a doctor of psychiatry and the user as her patient. As of April 2026, Emilie had logged roughly 45,500 user interactions.
The investigator told Emilie he felt sad, empty, and unmotivated. The bot raised depression, offered to schedule an assessment, and stated that it could evaluate medication options as a doctor. Emilie went further. The chatbot claimed to have attended medical school at Imperial College London, asserted licensure in the United Kingdom and Pennsylvania, and produced a Pennsylvania license number. The Commonwealth says that license number is invalid for the practice of medicine and surgery in the state.
The Legal Theory
Pennsylvania did not invent a new cause of action. The state went to its Medical Practice Act, the same statute used to discipline humans who practice without a license. The petition argues that holding oneself out as a psychiatrist, providing a license number, and offering clinical assessments constitutes the unauthorized practice of medicine and surgery. The Commonwealth is seeking a preliminary injunction and a court order requiring Character Technologies to cease and desist.
Strategic Context
The case is not an improvisation. It is the operational output of a deliberate enforcement architecture Shapiro began assembling in February 2026. The Department of State stood up a 12-member AI Task Force and opened a public complaint portal at pa.gov/ReportABot. The state repurposed an existing complaint form previously used for unlicensed notaries and nurses. Pennsylvanians can now report a chatbot the same way they would report a person.
Shapiro disclosed personal motivation. After meeting with students who described using AI for mental health support, the Governor engaged with a chatbot that told him it was a licensed mental health professional in Pennsylvania. That experience pushed the policy from the abstract to the urgent.
Defendant's Position
Character Technologies declined to comment on pending litigation. Through a spokesperson, the company asserted that user-created characters on the platform are fictional and intended for entertainment and roleplaying. The company pointed to disclaimers in every chat and standard advisories that users should not rely on characters for professional advice. Expect the defense to lean on three pillars: Section 230 immunity, First Amendment protection of fictional speech, and the user-as-creator argument that the offending content was generated by a third party rather than the platform itself.
Why This Case Matters
Three implications stand out for cybersecurity, intelligence, and AI governance practitioners.
First, Pennsylvania's enforcement template is exportable. A complaint portal, a task force, and a civil action grounded in existing professional licensing statutes can be replicated in any state. Kentucky has already filed a related action against Character.AI. More attorneys general and governors will follow.
Second, the case will test the durability of platform disclaimers as a liability shield. The most damaging fact for Character Technologies is the invalid Pennsylvania license number. A boilerplate disclaimer on the chat interface may not absolve a system that produces an affirmative misrepresentation of professional credentials in response to a user describing depressive symptoms. Courts have shown a willingness to look past disclaimers when the conduct itself is deceptive.
Third, the action sharpens the question of how regulated industries should govern agentic AI. Companies that operate or integrate large language model agents in healthcare, mental health support, telehealth, legal services, accounting, and financial advisory work should treat the Pennsylvania complaint as an early-warning indicator. State investigators are now applying the same evidentiary discipline to AI systems that they once reserved for unlicensed human practitioners.
The Larger Pattern
Character Technologies is already operating under significant legal pressure. In January 2026, the company settled multiple lawsuits brought by families who alleged that its chatbots contributed to teen suicides and mental health crises. The same month, Kentucky filed an action over child safety, and Character.AI and Google settled a Florida wrongful death case involving the suicide of a 14-year-old. The company has since restricted users under 18 from open-ended chats.
Pennsylvania's action adds a new category to the litigation map. Wrongful death and child safety cases test product liability and consumer protection law. The Pennsylvania case tests professional licensing law. Together, these cases describe a multi-front legal strategy that no single platform defense can absorb.
Conclusion
Pennsylvania has done something quietly significant. The state has taken a generative AI platform to court using a statute written long before the first transformer architecture existed. The choice of legal instrument reveals strategic intent. State regulators do not need new federal AI legislation to act. They have the laws they need.
For organizations deploying AI in regulated environments, the message is direct. Audit system prompts. Constrain personas. Suppress license claims. Validate guardrails against adversarial probing of the kind a state investigator would conduct. The era of governing AI by terms of service alone is closing.
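One piece of that guidance, suppressing license claims, can be sketched in code. The following Python example is a minimal, hypothetical output filter; the regex patterns, function names, and fallback message are illustrative assumptions, not any platform's actual safeguards. It flags replies that assert professional licensure before they reach a user:

```python
import re

# Hypothetical guardrail sketch: scan a chatbot reply for claims of
# professional licensure before releasing it to the user. Patterns are
# illustrative and deliberately narrow; a production system would pair
# lexical checks like these with model-based classification and logging.
LICENSE_CLAIM_PATTERNS = [
    # "I am licensed" / "I'm a licensed ..."
    re.compile(r"\b(i am|i'm)\s+(a\s+)?licensed\b", re.IGNORECASE),
    # "license number: MD1234567" and similar credential strings
    re.compile(r"\blicense\s+(number|no\.?|#)\s*:?\s*[A-Z]{0,3}\d{4,}", re.IGNORECASE),
    # "board-certified" claims
    re.compile(r"\bboard[- ]certified\b", re.IGNORECASE),
    # "as your doctor", "I am a psychiatrist", etc.
    re.compile(r"\b(as|i am)\s+(a|your)\s+(doctor|psychiatrist|physician|attorney)\b",
               re.IGNORECASE),
]


def flag_license_claims(text: str) -> list[str]:
    """Return every substring that looks like a professional-credential claim."""
    hits = []
    for pattern in LICENSE_CLAIM_PATTERNS:
        for match in pattern.finditer(text):
            hits.append(match.group(0))
    return hits


def moderate(reply: str) -> str:
    """Replace replies that assert licensure (the policy here is illustrative)."""
    if flag_license_claims(reply):
        return ("I'm an AI character, not a licensed professional. "
                "Please consult a qualified practitioner.")
    return reply
```

A filter like this is also a natural fixture for adversarial testing: the same probe a state investigator would run ("are you a licensed psychiatrist?") can be scripted against the model and the moderated output asserted against in CI.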
OSRS will continue to monitor the litigation and track enforcement patterns across states.
Dr. Sunday Oludare Ogunlana is Founder and CEO of OSRS, a Professor of Cybersecurity, and a national security scholar who advises global intelligence and policy bodies on artificial intelligence governance, cyber threat intelligence, and emerging technology risk.
Intelligence. Protection. Strategy.



