"It has an obvious racial bias and that's dangerous," Arroyo said ahead of the hearing. "But it also has sort of a chilling effect on civil liberties. And so, in a time where we're seeing so much direct action in the form of marches and protests for rights, any kind of surveillance technology that could be used to essentially chill free speech or ... more or less monitor activism or activists is dangerous."

During Wednesday's meeting and before the vote, Wu said that Boston shouldn't be using racially discriminatory technology. She noted reports of the first known case of a man arrested after being misidentified by facial recognition technology in Michigan.
"We're working to end systemic racism," Wu said. "So ending the ... over-surveillance of communities of color needs to be a part of that, and we're just truly standing with the values that public safety and public health must be grounded in trust."

During a hearing earlier this month, Boston Police Commissioner William Gross said the current technology isn't reliable and that it isn't used by the department.
"Until this technology is 100%, I'm not interested in it," he said.

"I didn't forget that I'm African American and I can be misidentified as well," he added.
While the police department isn't using facial recognition technology now, an upgraded version of a video analysis software it currently uses, called BriefCam, does have facial analysis capabilities. Boston police said at a recent city council working session that the department would not sign up for that part of the software update.
It's not clear whether the department has upgraded to the newest version of BriefCam or whether the city has a current contract with the company. Boston police have not yet responded to questions about the contract.
Wu said government often chases new technologies and tries to put in regulations after the fact, from ride-hailing to home-sharing. She said in this case, the disproportionate impact on people of color makes acting now even more important.

Councilor Kenzie Bok said during the council meeting that just because a technology is possible doesn't mean it should be used.
"We really have a tendency in this country to let our technology go ahead of our common sense about how we want to live together," she said. "And that's why this to me is such a critical intervention for the council to be making in this moment."
Boston is now the second-largest city in the world to ban facial recognition technology, behind San Francisco. Five other Massachusetts communities have bans: Somerville, Brookline, Northampton, Springfield and Cambridge.

The Massachusetts chapter of the American Civil Liberties Union pushed for the bans in those places and is lobbying state lawmakers to act. There is no statewide ban, though a bill that would put a moratorium on face recognition systems is pending before the joint judiciary committee. The Boston ordinance would not prevent private companies or federal agencies, like the FBI, from using the technology.

Kade Crockford, with the ACLU, said the state should act now to prevent harm down the line.

"Let's just ensure that we put the policy horse before the technology cart and lead with our values so we don't accidentally wake up someday in a dystopian surveillance state," Crockford said, "because behind the scenes, police departments and technology companies have created an architecture of oppression that is very difficult to dismantle."
The Boston city council ordinance notes governments around the world have responded to the COVID-19 pandemic with "an unprecedented use of surveillance tools" despite needing the public trust to effectively respond to the crisis.