
AI in Policing: Laws, Risks, and How OSRS Can Help



Artificial Intelligence (AI) is transforming policing in the United States. From writing reports faster to tracking vehicles, AI is speeding up law enforcement work. It also raises concerns about privacy, fairness, and accuracy. At OGUN Security Research and Strategic Consulting LLC (OSRS), we help agencies adopt AI tools in a way that is safe, legal, and trusted.


How Police Use AI Today

AI is now part of daily police operations. A key example is AI-powered police report drafting. Tools like Axon’s Draft One convert body-camera audio into written reports. This reduces report preparation time from 45 minutes to less than 10 minutes. Officers can review, edit, and approve the draft before submission.

Other AI tools include:

  • Facial recognition to match images against criminal databases

  • Automated license plate readers (ALPR) to monitor and locate vehicles

  • Predictive policing to anticipate crime patterns

  • Gunshot detection systems to pinpoint firearm activity

These tools help police respond faster. But they also raise concerns about accuracy, bias, and the handling of sensitive data.


What the Law Says

AI laws for policing vary by state and city:

  • Utah requires disclosure when AI helps create a report.

  • Maine has the strictest facial recognition limits, allowing use only for serious crimes.

  • California restricts ALPR data sharing and requires privacy policies.

  • Washington mandates bias testing and human oversight for facial recognition.

  • Santa Cruz, CA, banned both predictive policing and police facial recognition.

  • New Orleans, LA, has strict oversight for facial recognition and other surveillance tools.

These rules aim to protect privacy, prevent bias, and maintain transparency.


Texas AI Law: New Rules and Impact on Police

Texas recently passed comprehensive AI legislation that takes effect in 2026. It sets clear limits on how public agencies, including police, can use AI.


Key Texas Rules:

  • No social scoring or biometric identification without consent

  • A ban on harmful AI uses, such as promoting violence or self-harm

  • Mandatory AI disclosure when public agencies use AI tools

  • An AI Council and a regulatory sandbox for ethical testing before full deployment

Texas also made it a felony to create or distribute AI-generated child sexual abuse material, even if animated or drawn. The state continues using ALPR extensively, especially for immigration and abortion-related investigations, sparking civil liberties concerns.


How OSRS Can Support Agencies

OSRS offers tailored support to help law enforcement agencies use AI responsibly:

  • AI policy development that complies with state and city laws

  • Compliance audits to ensure AI tools meet legal and ethical standards

  • Training programs on ethical AI use, bias prevention, and data handling

  • Risk assessments to identify and address legal and reputational risks

  • Vendor reviews to ensure AI systems are accurate, transparent, and auditable

With OSRS, agencies can confidently adopt AI while maintaining compliance and community trust.


Frequently Asked Questions About AI in Policing

1. How is AI used in policing?

Police use AI for report writing, facial recognition, license plate tracking, predictive policing, and gunshot detection. These tools can help officers work faster and make decisions with more data.

2. What are the risks of AI in law enforcement?

AI can be wrong, biased, or used without enough oversight. Errors can lead to wrongful arrests or violations of privacy. Strong rules and human review help reduce these risks.

3. What does Texas law say about AI in policing?

Texas bans AI for social scoring and biometric identification without consent. Agencies must disclose when they use AI. The law also blocks harmful AI uses and creates an AI Council to set ethical standards.

4. Which states have the strictest AI rules for police?

Maine has the toughest rules on facial recognition. Utah requires disclosure for AI-written police reports. California has strong limits on sharing license plate reader data. Washington mandates bias testing for facial recognition.

5. How can police agencies follow AI laws?

Agencies can create clear AI policies, train staff on ethical use, and audit AI systems for accuracy and fairness. Working with experts like OSRS can ensure compliance and build community trust.

6. How can OSRS help with AI compliance?

OSRS provides AI policy development, compliance reviews, officer training, risk assessments, and vendor evaluations. Our team ensures agencies use AI tools legally, ethically, and effectively.

Final Thoughts

AI in policing can improve efficiency and enhance public safety. But without proper rules and oversight, it can also create legal and ethical challenges. Laws in states like Texas, Utah, and Maine are setting important standards. Agencies must act now to adapt.

Contact OSRS at www.ogunsecurity.com to get expert guidance on AI compliance, policy, and training.


---------

Dr. Sunday Oludare Ogunlana is a cybersecurity scholar-practitioner, professor, and founder of OGUN Security Research and Strategic Consulting LLC. He advises agencies and businesses on AI governance, cyber risk, and compliance, bridging technology with strategy to protect public trust.


