## Predictive Policing on the Rails? NYC Explores AI to Detect “Trouble” in Subways
New York City’s Metropolitan Transportation Authority (MTA) is taking a futuristic approach to subway safety, exploring the use of artificial intelligence to predict and prevent crime and dangerous behavior on its platforms. The ambitious initiative aims to identify and flag potentially problematic situations *before* they escalate, so that security or law enforcement can intervene early.
MTA chief security officer Michael Kemper revealed the agency’s exploration of AI systems for “predictive prevention” during a recent MTA safety committee meeting. He stated that the MTA is “studying and piloting technology like AI to sense potential trouble or problematic behavior on our subway platforms.” According to Kemper, AI could potentially identify individuals “acting out, irrational,” triggering an alert and a prompt response from security or the police department, ideally “before waiting for something to happen.”
Kemper emphasized the potential of AI, stating “AI is the future,” and said the MTA is actively “working with tech companies literally right now” to determine viable solutions for the complex subway environment. While he remained tight-lipped on specific companies and implementation details, the prospect of AI-powered surveillance already raises significant questions about privacy and potential bias.
Offering some clarity, MTA spokesperson Aaron Donovan assured *Gothamist* that the system would *not* use facial recognition. “The technology being explored by the MTA is designed to identify behaviors, not people,” Donovan said. The distinction is meant to address concerns about mass surveillance and misidentification by focusing on observable actions deemed indicative of potential risk, rather than on who is performing them.
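Neither Kemper nor Donovan described how such a system would actually work, and the MTA has disclosed no technical details. As a purely illustrative sketch of the general idea of flagging behaviors rather than identities, one generic approach would score video frames by how far their motion deviates from a learned baseline and escalate only sustained spikes for human review. Every function name and threshold below is hypothetical and not attributed to the MTA or its vendors.

```python
import numpy as np

# Hypothetical illustration only: the MTA has not disclosed its methods.
# The idea: measure frame-to-frame motion, compare it to a baseline for
# that camera, and alert only when motion stays anomalous for a while.

ALERT_THRESHOLD = 3.0   # z-score above baseline (illustrative tuning)
SUSTAIN_FRAMES = 30     # ~1 second at 30 fps before raising an alert

def motion_energy(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel change between consecutive grayscale frames."""
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def detect_anomalies(frames: list[np.ndarray]) -> list[int]:
    """Return indices of frames where motion stays anomalously high."""
    energies = [motion_energy(a, b) for a, b in zip(frames, frames[1:])]
    mu, sigma = np.mean(energies), np.std(energies) + 1e-9
    alerts, streak = [], 0
    for i, energy in enumerate(energies):
        z = (energy - mu) / sigma
        streak = streak + 1 if z > ALERT_THRESHOLD else 0
        if streak >= SUSTAIN_FRAMES:
            alerts.append(i)  # flag for human review, not automated action
    return alerts
```

Note that nothing in this sketch identifies a person: it only scores aggregate motion. The requirement that a spike be *sustained* before alerting is one common way such systems try to reduce false alarms from momentary noise, though how well any behavioral definition of “trouble” generalizes across a crowded platform is exactly the open question critics raise.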
This isn’t the MTA’s first foray into AI. In 2023, it was revealed that the agency was using AI-powered surveillance software to track fare evasion, monitoring when, where, and how it most frequently occurs. This prior implementation underscores the MTA’s willingness to embrace AI as a tool for managing the sprawling subway system.
However, the prospect of predicting crime based on behavior raises complex ethical and logistical challenges. Defining “problematic behavior” without resorting to subjective judgments and ensuring equitable application across diverse populations will be crucial. As the MTA moves forward with its AI-powered surveillance plans, transparency and public dialogue will be essential to ensure that safety enhancements don’t come at the cost of civil liberties.