Baltimore police officer Richard Pinheiro had his body camera rolling when he stumbled upon a soup can and found a small bag of white pills inside it. Pinheiro and two other officers then arrested a suspect, using the pills, their testimony and video footage from the body camera as evidence.

The problem for Pinheiro, as detailed in a September 26 article from The Atlantic, is that his body camera, an Axon Body 2, boasts a feature that saves 30 seconds of footage before someone activates the record button. 

That half minute of video showed the officer planting the pills before pressing record. Pinheiro was arrested and indicted by a grand jury on charges of “tampering with or fabricating physical evidence,” according to The Atlantic, which adds:

Baltimore attorneys would eventually drop more than a hundred cases involving Pinheiro and the other officers in the video, creating a months-long scandal that summer as officers admitted to “re-creating” evidence finds with their cameras. The then-commissioner Kevin Davis was forced to issue an internal memo to all officers forbidding the practice: “In the event your body worn camera is not activated during the recovery of evidence,” the memo reads, “under no circumstances shall you attempt to recreate the recovery of evidence after re-activating your body-worn camera.”

Given the vulnerability to tampering exhibited by the Axon Body 2 and other cameras, new body camera technology is being tested that would ostensibly limit officers’ ability to stage crime scenes and address the problem of officers failing to turn on their cameras before using force. Per The Atlantic, the proposed tech includes cameras that would be auto-triggered by various stimuli. For example, video imaging company Digital Ally announced patents on cameras that begin recording during “triggering” events like car crashes or guns being drawn from holsters. From the article:

But some of the auto-triggers would cause the cameras to record simply as police move through public spaces. As described in one patent, an officer could set his body camera to actively search for anyone with an active warrant. Using face recognition, the camera would scan the faces of the public, then compare that against a database of wanted people. (The process could work similarly for a missing person.) If there’s a match, the camera would begin to record automatically.

Facial recognition, however, comes with a vast list of potential pitfalls, according to Woodrow Hartzog, a Northeastern University professor interviewed by The Atlantic, who says:

“We should all be extremely skeptical of having it deployed in any wearable technology, particularly in contexts where the surveilled are so vulnerable, such as in many contexts involving law enforcement.”

The investigation of the Ferguson Police Department by the Department of Justice, in the aftermath of the shooting death of Michael Brown, found that Ferguson officers had concocted a bigoted revenue model that targeted Black drivers at much higher rates than non-Black drivers; officers penalized them with citations and arrest warrants for missed payments. As The Atlantic article points out, facial recognition technology would facilitate the arrest model that Ferguson PD engineered by making simply walking out in public a risk.

Technology, the article makes clear, can only go so far in improving police accountability.

Having cameras always on, always recording, may be a surveillance nightmare, but leaving recording entirely to officer discretion, even with fail-safes, risks manipulation or misconduct. In certain spaces of social life, like airports, we’re willing to accept a forced lack of anonymity. But when we let police set the boundaries of permanent suspicion, we risk a world where going out in public is a transaction: we have to exchange anonymity for the right to be presumed innocent.

Read the full article over at The Atlantic.