London's Metropolitan Police has opened investigations into hundreds of its own officers following a week-long deployment of an artificial intelligence tool developed by Palantir, the controversial US data analytics company, according to reporting by The Guardian.
The software was used to monitor staff by drawing on data already held by the force, surfacing a range of alleged rule-breaking that spans relatively minor infractions - such as violations of work-from-home policies - through to more serious matters including suspected corruption.
Scope of the operation
The deployment lasted only a single week, yet it flagged enough cases to prompt investigations involving hundreds of personnel. The scale of the findings has drawn attention both to the effectiveness of AI-driven internal monitoring and to the broader questions it raises about surveillance within police institutions.
Palantir, founded in 2003 with early backing from the CIA's venture capital arm, has long been a polarising presence in the technology sector. The company provides data integration and analytics platforms to governments, military agencies, and law enforcement bodies across the world. Its contracts with public institutions have repeatedly attracted scrutiny from civil liberties groups concerned about privacy and the scope of its data practices.

Context of Met misconduct
The Metropolitan Police has faced sustained pressure in recent years to address misconduct within its ranks. A series of high-profile cases - including the murder of Sarah Everard by a serving officer and subsequent reviews finding evidence of institutional failings - led to the force being placed into special measures by His Majesty's Inspectorate of Constabulary and Fire and Rescue Services in 2022.
Against that backdrop, turning to AI tools to identify internal wrongdoing represents a notable shift in how the force approaches accountability. The technology also raises procedural and ethical questions, however, about how flagged cases are verified before formal investigations are launched, and about what data is being analysed.
Reactions and scrutiny
The Guardian's report does not include a detailed response from the Metropolitan Police regarding the criteria used to initiate investigations or the current status of cases. Civil liberties advocates have previously raised concerns about Palantir's role in law enforcement contexts, arguing that algorithmic tools used in policing require robust independent oversight.
The Metropolitan Police has not publicly confirmed the full extent of the findings or how many of the flagged officers have faced formal disciplinary proceedings as a result of the AI-assisted review.