
Rite Aid Was Using Facial Recognition Cameras For Years — Until Now

Both the tech and its implementation were used to racially profile people.

Outside of any real-world context, facial recognition tech is cool in a sci-fi, we're-living-in-the-future kind of way. But a Reuters report on the use of facial recognition at 200 Rite Aid stores — one of the largest implementations in the United States — shows that in context it's indicative of a future no one interested in justice should be excited about.

Rite Aid used different vendors and equipment over the course of its facial recognition efforts, but it always worked the same way. Cameras captured customers’ faces and compared them to a database of images of faces belonging to people whom the company had “previously observed engaging in potential criminal activity.”

When a match was made, the software pinged a smartphone held by the store security agent. They would review the match and, if accurate, ask the person in question to leave.
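The workflow described above — capture a face, compare it to an enrolled watchlist, and alert security when a candidate match crosses a similarity threshold — can be sketched in simplified form. This is a hypothetical illustration, not Rite Aid's actual software: the embeddings, entry names, and threshold are all invented, and a real system would generate embeddings with a trained face-recognition model rather than hand-written vectors.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_match(captured, watchlist, threshold=0.9):
    """Compare a captured embedding against the watchlist.

    Returns the best-matching entry id if its similarity exceeds the
    threshold, else None (no alert is sent to the security agent).
    """
    best_id, best_score = None, threshold
    for entry_id, embedding in watchlist.items():
        score = cosine_similarity(captured, embedding)
        if score > best_score:
            best_id, best_score = entry_id, score
    return best_id

# Toy watchlist of two enrolled embeddings (invented values).
watchlist = {
    "entry-001": [0.9, 0.1, 0.2],
    "entry-002": [0.1, 0.8, 0.5],
}

capture = [0.88, 0.12, 0.21]          # nearly identical to entry-001
print(find_match(capture, watchlist))  # alerts on "entry-001"
```

The threshold is the crux of the fairness problem the article goes on to describe: set it too low and the system floods agents with false matches, and if the underlying model produces less distinct embeddings for some demographic groups, those groups bear more of the misidentifications.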

Even if it worked perfectly and was implemented fairly, the system would be problematic. It rests on an assumption that someone who shoplifted years earlier deserves to be removed from a store today. It also gives corporations the power to eject people from their stores based on information they alone control, which is troubling in its own right.

And in practice, facial recognition at Rite Aid did not work perfectly and was not implemented fairly.

Ten security agents who talked to Reuters all agreed that the system regularly misidentified people, and there are indications that the software was less effective on people of color — and therefore more of a risk to them.

Reuters found one Rite Aid customer, Tristan Jackson-Stankunas, who was asked to leave a California store by a security agent based on a match with a photo that he says, with the exception of the fact that it showed a Black man, looked nothing like him.

“It doesn’t pick up Black people well,” a security agent at a Detroit Rite Aid said. “If your eyes are the same way, or if you’re wearing your headband like another person is wearing a headband, you’re going to get a hit.”

And even beyond the inadequacies of the software, Rite Aid's implementation of facial recognition was disproportionately centered on communities of color — what those familiar with the program called "tougher," "toughest," or "worst" areas. Fifty-two of the 65 stores in the company's first big rollout of the tech were in areas where the largest group was Black or Latino.

For instance, a store on Manhattan’s white, wealthier Upper West Side did not get facial recognition while one in Harlem, just two miles away, did. This despite the fact that an internal review found both high-earning stores had an equal risk of loss.

For its part, Rite Aid insists that the facial recognition program “had nothing to do with race and was intended to deter theft and protect staff and customers from violence.” It also said that the program was discontinued amidst “a larger industry conversation.”

“Other large technology companies seem to be scaling back or rethinking their efforts around facial recognition given increasing uncertainty around the technology’s utility,” the company said, neglecting to comment on how its implementation and not just the tech itself might have contributed to the failure of the program.