When Artificial Intelligence Gets It Wrong: Five Cases That Should Alarm Every Security Professional



A Technology That Moves Faster Than the Law


Angela Lipps had never set foot in North Dakota. She had never been on an airplane. She spent nearly her entire life within a small radius of her home in rural Tennessee, raising children, babysitting grandchildren, and living quietly. Then, one morning in July 2025, a team of U.S. Marshals arrived at her door with weapons drawn. An artificial intelligence system had decided she was a fugitive.

Lipps spent more than five months in jail before her lawyer proved, using basic bank records, that she had been in Tennessee the entire time. She lost her home, her car, and her dog. The Fargo Police Department has not apologized.


Her story is not an anomaly. It is part of a documented, growing pattern of artificial intelligence systems being deployed in high-stakes environments, including law enforcement, healthcare, and national security, without adequate safeguards, human oversight, or accountability structures. For military and intelligence practitioners, cybersecurity professionals, policymakers, and law enforcement leaders, this pattern carries urgent operational and strategic implications.


Five Cases That Reveal a Systemic Failure

The following cases represent a range of sectors and technologies. Each illustrates a shared root cause: AI systems being used to make decisions, not assist them.


  • Angela Lipps, Fargo, North Dakota (2025): Clearview AI matched Lipps to bank fraud surveillance footage. A detective reviewed her social media and concluded she matched the suspect. No alibi check was made. No one called her. She was arrested at gunpoint in front of four children she was babysitting, extradited across the country, and jailed for 163 days.

  • Robert Williams, Detroit, Michigan (2020): The first publicly documented case of a wrongful arrest driven by facial recognition. Detroit police ran blurry store surveillance footage through a state database and arrested Williams outside his home in front of his wife and young daughters. He was the ninth-closest match in the database. A landmark 2024 settlement now requires Detroit police to corroborate any facial recognition result with independent evidence before seeking a warrant.

  • Porcha Woodruff, Detroit, Michigan (2023): Woodruff was eight months pregnant when Detroit police arrested her for carjacking based on a facial recognition match. She was detained and interrogated for eleven hours. The suspect showed no signs of pregnancy. The case was dropped weeks later. A civil rights lawsuit is ongoing.

  • Michael Williams, Chicago, Illinois (2021): Williams spent nearly a year in pretrial detention for murder after ShotSpotter, an AI-powered acoustic gunshot detection system, placed him at the scene of a shooting. Police had no witnesses, no physical evidence, and no established motive. Investigators also ignored separate leads pointing to another suspect. The charges were eventually dismissed.

  • UnitedHealthcare Medicare Advantage Denials (2023 to Present): A class action lawsuit advancing through federal court alleges that UnitedHealthcare used an AI tool called nH Predict to deny post-acute care claims for elderly Medicare Advantage patients, including nursing home and home healthcare services, without physician review. Plaintiffs allege that roughly 90% of the tool's denials are reversed on appeal, and that the insurer relied on it knowingly because fewer than 0.2% of policyholders appeal denied claims; the short calculation after this list shows why that asymmetry matters. In some cases, denial of care is alleged to have contributed to patient deaths.
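
The economics alleged in the complaint are easy to check. The sketch below is a back-of-the-envelope calculation using only the two figures reported from the lawsuit, a roughly 90% reversal rate on appeal and an appeal rate under 0.2%; the denial count of 10,000 is a hypothetical round number chosen for illustration.

    # Back-of-the-envelope arithmetic using the figures alleged in the
    # lawsuit; the denial count is a hypothetical round number.
    denials = 10_000          # hypothetical volume of AI-driven denials
    appeal_rate = 0.002       # fewer than 0.2% of policyholders appeal
    reversal_rate = 0.90      # ~90% of appealed denials allegedly reversed

    appealed = denials * appeal_rate       # 20 denials contested
    overturned = appealed * reversal_rate  # ~18 reversed
    standing = denials - overturned        # ~9,982 denials stand

    print(f"Appealed: {appealed:.0f}, overturned: {overturned:.0f}")
    print(f"Denials left standing: {standing:.0f} of {denials:,}")

Under these assumptions, fewer than two dozen of ten thousand denials are ever examined, so even a very high error rate produces almost no correcting signal. That asymmetry is precisely what the plaintiffs allege made the tool safe to rely on.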


The Common Thread: AI as Decision-Maker, Not Decision Aid

Across every case above, the critical failure was not the technology itself. It was how the technology was positioned within a decision-making process. AI tools in these cases operated not as analytical aids to be verified by trained human judgment, but as authoritative outputs that closed investigations before they were opened.


A January 2025 Washington Post investigation found that in every documented case of wrongful arrest tied to facial recognition, investigators skipped basic steps that would have cleared the suspect before an arrest warrant was ever signed: checking alibis, verifying physical descriptions, or reviewing transaction records. Clearview AI itself requires agencies using its platform to acknowledge that results are indicative, not definitive, and that further investigation is required before taking action. In at least five of seven documented wrongful arrest cases reviewed by the ACLU, police received that explicit warning and made arrests anyway.
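
The Detroit settlement points toward a structural fix: treat a facial recognition match as a lead that cannot advance to a warrant request until the basic steps above are completed and independent evidence is attached. The following is a minimal sketch of that gate in Python; the Lead class, its field names, and the may_seek_warrant function are hypothetical illustrations, not any agency's actual system.

    from dataclasses import dataclass, field

    @dataclass
    class Lead:
        # A facial recognition match is a lead, never proof of identity.
        match_confidence: float
        alibi_checked: bool = False
        description_verified: bool = False
        transaction_records_reviewed: bool = False
        independent_evidence: list = field(default_factory=list)

    def may_seek_warrant(lead: Lead) -> bool:
        # The match score alone can never justify a warrant, no matter
        # how high it is; corroboration is a hard requirement.
        basic_steps_done = (lead.alibi_checked
                            and lead.description_verified
                            and lead.transaction_records_reviewed)
        return basic_steps_done and len(lead.independent_evidence) > 0

    lead = Lead(match_confidence=0.99)  # a "strong" match...
    assert not may_seek_warrant(lead)   # ...still cannot proceed by itself

In the Lipps case, any single one of these checks, such as a review of her bank records, would have stopped the process before an arrest.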


This is not an argument against AI. It is an argument for governance. The question is not whether artificial intelligence belongs in law enforcement or healthcare; it is already there. The question is whether the institutional frameworks surrounding its use are adequate to prevent the kind of catastrophic errors documented above.


What Security and Policy Professionals Must Understand

For intelligence and security practitioners, these cases carry direct operational lessons:

  • AI bias is a threat surface. Facial recognition tools are demonstrably less accurate for people of color. Research from the National Institute of Standards and Technology found that Black and Asian individuals were falsely matched at rates 10 to 100 times higher than white individuals. In a law enforcement context, this is not merely a civil liberties concern. It is an intelligence integrity issue.

  • Automation bias degrades analytical tradecraft. When an algorithm produces a match, human reviewers often anchor to that result and stop investigating. This mirrors confirmation bias in traditional intelligence analysis, and it is being institutionalized at scale.

  • Regulatory gaps create institutional liability. As of early 2025, only 15 U.S. states had enacted any facial recognition legislation governing law enforcement. North Dakota, where Angela Lipps was prosecuted, was not among them. Organizations deploying AI in high-stakes decisions without policy guardrails are accumulating legal, reputational, and operational risk.

  • Accountability structures must be defined before deployment. The UnitedHealthcare lawsuit raises a question that will increasingly reach every sector: when an AI system makes a harmful decision, who is responsible? The algorithm developer, the deploying institution, the individual operator, or some combination of all three?


The Strategic Imperative: Govern AI Before It Governs Outcomes

The five cases examined here represent far more than individual tragedies. They are institutional intelligence failures. In each instance, a technology was allowed to narrow human judgment rather than inform it, and people paid the price with their liberty, their health, and in some cases their lives.


As AI systems become embedded in law enforcement workflows, healthcare adjudication, and national security operations, the security professionals, intelligence practitioners, and policymakers reading this have a responsibility to ask hard questions of every AI-assisted process within their institutions: What is the error rate? Who reviews the output? What corroborating evidence is required before action is taken? And critically, what happens when the system is wrong?
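
Those four questions can be made concrete by requiring every AI-assisted process to file its answers in a structured, auditable form before deployment. The sketch below is one hypothetical way to capture them; the AIDeploymentReview class and its field names are invented for illustration, and the sample values are not drawn from any real program.

    from dataclasses import dataclass

    @dataclass
    class AIDeploymentReview:
        process_name: str
        measured_error_rate: float    # What is the error rate?
        human_reviewer_role: str      # Who reviews the output?
        required_corroboration: list  # What evidence is required before action?
        failure_procedure: str        # What happens when the system is wrong?

    # Sample values are illustrative only, not drawn from any real program.
    review = AIDeploymentReview(
        process_name="facial recognition lead generation",
        measured_error_rate=0.10,
        human_reviewer_role="detective, with supervisor sign-off",
        required_corroboration=["alibi check", "physical description",
                                "transaction records"],
        failure_procedure="suspend the tool, notify the affected person, audit",
    )

A review record like this does nothing by itself, but it forces the deploying institution to answer the hard questions before the system is in the loop rather than after someone has been harmed.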


At OGUN Security Research and Strategic Consulting LLC, we advise government agencies, private sector organizations, and institutional clients on AI governance frameworks, cybersecurity posture assessment, and intelligence-grade risk analysis. If your organization is deploying or evaluating AI tools in sensitive or high-stakes environments, contact OSRS today at contact@ogunsecurity.com or call (469) 469-3877. Visit us at www.ogunsecurity.com.


If this article informed your thinking, share it with your network on LinkedIn, X (Twitter), and Facebook. Help your colleagues and peers stay ahead of the AI governance conversation.

Subscribe to our email list at www.ogunsecurity.com to receive exclusive intelligence briefs, cybersecurity analyses, and policy insights delivered directly to your inbox.


Follow OSRS on Google News, Twitter/X, and LinkedIn for more exclusive cybersecurity insights and expert analyses.


About the Author: Dr. Sunday Oludare Ogunlana is Founder and CEO of OGUN Security Research and Strategic Consulting LLC, a Professor of Cybersecurity, and a national security scholar who advises global intelligence and policy bodies on emerging threats at the intersection of technology, law, and governance.
