How New U.S. Policies on Police Use of AI Are Reshaping Public Safety
- Oludare Ogunlana
- Dec 9
- 3 min read

Artificial intelligence is rapidly changing the way police agencies work across the United States. From drafting reports with generative-AI tools to using facial recognition and automated license plate readers (ALPR), these technologies bring new possibilities and new risks. This article explores the latest updates in state, city, and federal policies that govern how police can use AI. It offers a clear guide for beginners, students, researchers, cybersecurity professionals, and policymakers who want to understand where the law is heading.
1. Why States and Cities Are Updating Their Police AI Policies
Across the country, lawmakers are paying closer attention to how police agencies deploy AI. Several factors are driving this momentum.
- Growing public concern about surveillance.
- The rise of generative-AI tools that can shape police reporting.
- Increased reliance on automated systems such as facial recognition and ALPR.
Recent policy changes highlight the shift toward more transparency and accountability. Many local governments now require police departments to publish inventories of the AI tools they use. Others seek to limit certain practices, especially when they involve identifying individuals or predicting crime patterns.
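To make the idea of a published AI-tool inventory concrete, here is a minimal sketch of what one machine-readable entry might look like. The field names, vendor, and values are illustrative assumptions, not a format drawn from any specific ordinance.

```python
# Hypothetical sketch of a machine-readable AI-tool inventory entry.
# Every field name and value below is illustrative, not statutory.
import json

inventory_entry = {
    "tool_name": "Example Facial Recognition Service",   # hypothetical product
    "vendor": "ExampleVendor Inc.",                       # hypothetical vendor
    "category": "facial_recognition",
    "purpose": "Generating investigative leads from still images",
    "data_sources": ["booking photos"],
    "retention_days": 90,
    "approved_uses": ["post-incident investigation"],
    "prohibited_uses": ["sole basis for arrest", "live public surveillance"],
    "last_audit": "2025-06-30",
}

print(json.dumps(inventory_entry, indent=2))
```

Publishing entries like this in a consistent, structured format is what lets journalists, auditors, and the public actually compare how tools are used across departments.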
2. Facial Recognition and ALPR: New Guardrails on Powerful Tools
Facial recognition technology is one of the most scrutinized forms of police AI. States and cities are updating policies to create stronger safeguards. Key trends include:
- Requirements for warrants before police can use facial recognition.
- Restrictions on using these tools as the sole basis for arrests.
- Public reporting obligations that document how often these systems are used.
Similarly, ALPR systems have attracted attention because they capture sensitive movement data. Policymakers are introducing rules that limit how long data can be stored and how it can be shared across jurisdictions.
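A retention limit is easy to state in policy but only meaningful if it is enforced in the data pipeline. The sketch below shows one way a department might purge expired plate reads; the record format and the 30-day window are assumptions for illustration, since real limits vary by jurisdiction.

```python
# Minimal sketch of an ALPR retention rule. The record format and the
# 30-day window are assumed for illustration; actual limits vary.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed policy limit

def purge_expired_reads(reads, now=None):
    """Return only the plate reads still within the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in reads if r["captured_at"] >= cutoff]

# Usage with two hypothetical reads: one recent, one expired.
reads = [
    {"plate": "ABC1234", "captured_at": datetime.now(timezone.utc) - timedelta(days=5)},
    {"plate": "XYZ9876", "captured_at": datetime.now(timezone.utc) - timedelta(days=45)},
]
print(len(purge_expired_reads(reads)))  # -> 1
```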
3. Generative-AI in Police Reports: Emerging Rules and Real Questions
As generative AI becomes more common in public agencies, lawmakers are asking whether police should disclose when AI helps prepare official reports. Legislators in several states are showing growing interest, although only a few have introduced clear proposals so far. Common ideas in these proposals include:
- Requiring officers to certify that they reviewed AI-generated content.
- Retaining first drafts created by AI for auditing.
- Disclosing in the final report that AI contributed to its creation.
While these changes are still developing, they demonstrate real momentum toward greater transparency in law enforcement documentation.
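To illustrate how those three ideas could fit together in practice, here is a minimal sketch of a report record that carries an officer certification, the retained AI first draft, and a disclosure flag. All field names are assumptions for illustration, not requirements from any current bill.

```python
# Illustrative report record covering the three proposal ideas above:
# officer certification, retained first draft, and AI disclosure.
# Field names are hypothetical, not statutory.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PoliceReportRecord:
    case_number: str
    final_text: str
    ai_assisted: bool                       # disclosed in the final report
    ai_first_draft: str | None = None       # retained for auditing if AI was used
    officer_certified_review: bool = False  # officer attests to reviewing AI output
    certified_at: datetime | None = None

    def certify(self):
        """Record the officer's attestation that the AI-assisted text was reviewed."""
        self.officer_certified_review = True
        self.certified_at = datetime.now(timezone.utc)

record = PoliceReportRecord(
    case_number="2025-000123",
    final_text="Officer-reviewed narrative...",
    ai_assisted=True,
    ai_first_draft="Unedited draft produced by the drafting tool...",
)
record.certify()
print(record.officer_certified_review)  # -> True
```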
4. Federal Government Actions and the Push for Oversight
Federal discussions are moving toward national guidelines on AI in policing. These efforts focus on creating a responsible framework rather than banning tools outright. Themes emerging at the federal level include:
- Defining acceptable uses of high-risk AI systems.
- Promoting auditability and fairness protections.
- Developing unified principles for facial recognition and predictive algorithms.
Although the federal government has not mandated AI disclosure in police reports, its guidance is shaping how states draft new regulations.
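Auditability, in particular, usually comes down to keeping a reviewable trail of every use of a high-risk system. The sketch below shows one simple way to log each AI query; the schema and the JSON Lines storage choice are assumptions for illustration only.

```python
# Minimal sketch of an append-only audit trail for high-risk AI use.
# The schema and JSON Lines storage are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_ai_use(path, system, purpose, operator, outcome):
    """Append one structured audit record per AI query."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,      # e.g., "facial_recognition"
        "purpose": purpose,    # stated investigative purpose
        "operator": operator,  # badge or employee ID
        "outcome": outcome,    # e.g., "candidate list returned"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use("ai_use_audit.jsonl", "facial_recognition",
           "Identify suspect in robbery case 2025-0456",
           "badge-1122", "candidate list returned")
```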
What Comes Next and How OSRS Can Help
AI will continue to transform law enforcement. Students, researchers, intelligence practitioners, cybersecurity professionals, and policymakers need clear guidance to navigate this shifting landscape. The latest updates show a strong movement toward transparency, accountability, and public trust. OSRS can support your organization by providing expert analysis, AI governance strategies, cybersecurity advisory services, and training programs that help agencies adopt these technologies responsibly.
If you found this article helpful, please share it with your network and subscribe to the OSRS email list. You can also follow us on Google News, Twitter, and LinkedIn for more cybersecurity insights and expert analysis.
About the Author
Dr. Oludare Ogunlana is a cybersecurity professor and founder of OGUN Security Research and Strategic Consulting LLC. He specializes in cyber policy, AI governance, intelligence analysis, and national security advisory services for public and private sector organizations.



