Mask-Scanning Tech: A COVID-Mitigation Must or a Privacy Disaster?
Bennat Berger
Fall has arrived and brought with it an ever-pressing need for disease tracing and mitigation measures. Cooler weather heralds indoor gatherings, flu season, common cold outbreaks, and, potentially, a second wave of COVID-19. We face a difficult time -- but will the surveillance measures we take to encourage protective behaviors and prevent the spread of disease ultimately expose us to even greater (privacy) risks?
Beyond implementing social distancing and testing measures, health authorities have pressed the general public to wear masks in public settings and around people outside their household, especially when other social distancing measures (i.e., maintaining a six-foot distance) are difficult to maintain. Doing so has proven benefits; one study in Germany recently found that mask mandates slowed the growth of infections by about 40 percent.
“I think the biggest thing with COVID now that shapes all of this guidance on masks is that we can’t tell who’s infected,” Dr. Peter Chin-Hong, an infectious disease specialist at UC San Francisco, recently shared for the University’s news bulletin. “You can’t look in a crowd and say, oh, that person should wear a mask. There’s a lot of asymptomatic infection, so everybody has to wear a mask.”
To that end, some have posited the use of mask-scanning tech as a means to enforce mask-wearing compliance and limit disease spread. In September, National Geographic reported that the San Francisco tech company LeewayHertz had pioneered a mask recognition algorithm that could be used to identify non-compliance and facilitate enforcement efforts.
As reporters for the magazine wrote: “LeewayHertz’s algorithm [...] could be used in real time and integrated with closed-circuit television (CCTV) cameras. From a given frame in a video, it isolates images and organizes them into two categories, people who are wearing masks and those who are not.”
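That description amounts to a per-frame, two-way sorting step: detect faces, score each one, and bucket it as masked or unmasked. The sketch below illustrates just that sorting stage in Python. Everything here -- the `FaceDetection` record, the `mask_score` field, the `partition_by_mask` function, and the 0.5 threshold -- is a hypothetical stand-in for whatever LeewayHertz's actual model outputs, not their implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical detection record: a face bounding box plus a model's
# estimated probability that the face is wearing a mask. In a real
# deployment this score would come from a trained classifier run on
# each CCTV frame; here it is supplied directly for illustration.
@dataclass
class FaceDetection:
    box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    mask_score: float               # 0.0 = surely unmasked, 1.0 = surely masked

def partition_by_mask(
    detections: List[FaceDetection], threshold: float = 0.5
) -> Tuple[List[FaceDetection], List[FaceDetection]]:
    """Split one frame's detections into (masked, unmasked) groups.

    Note: nothing here identifies *who* a person is -- the output is
    only the count and location of covered vs. uncovered faces.
    """
    masked = [d for d in detections if d.mask_score >= threshold]
    unmasked = [d for d in detections if d.mask_score < threshold]
    return masked, unmasked

# Example frame with three detected faces.
frame = [
    FaceDetection((10, 20, 50, 50), mask_score=0.92),
    FaceDetection((200, 40, 48, 52), mask_score=0.08),
    FaceDetection((400, 30, 51, 49), mask_score=0.77),
]
masked, unmasked = partition_by_mask(frame)
print(len(masked), len(unmasked))  # 2 masked faces, 1 unmasked
```

The privacy-relevant design point is visible in the sketch itself: the output is anonymous counts and coordinates, with no identity attached -- which is exactly the distinction the rest of this piece turns on.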
LeewayHertz’s mask-recognition software has been deployed in “stealth mode” in several settings across the United States and Europe. Several restaurants, hotels, and even one East Coast airport have used the algorithm to ensure that their staff members comply with mask-wearing policies.
The benefits of such technology are evident at a glance. LeewayHertz’s algorithm could lift the burden of identifying maskless shoppers and personnel and allow authorities to better use their time for targeted enforcement efforts. This tactic would empower public health authorities to enforce mask-wearing, limit noncompliance, and minimize disease spread in heavily trafficked public spaces.
Of course, anyone remotely concerned with data privacy would also immediately wonder how invasive such technology could be. The answer? Not very -- at least, not yet.
The loophole is this: mask recognition software doesn’t identify faces, only whether or not a face is covered. In fact, research indicates that masks can drastically limit the efficacy of facial recognition technology. According to one study by the US National Institute of Standards and Technology (NIST), masks cause the most widely used facial recognition algorithms’ error rates to spike to between 5 percent and 50 percent.
Technically, mask recognition sidesteps the privacy quagmire by not identifying those it flags -- for now.
We find ourselves in an awkward spot. On the one hand, the idea of sending enforcers after non-compliant shoppers or staff flagged by mask-recognition-empowered CCTV surveillance feels a little too close to an Orwellian dystopia for comfort. On the other, the sheer scale of the pandemic compels public health authorities to do what they can to limit the spread of potentially deadly diseases.
“There’s a willingness to relax the rules when it comes to anything related to COVID,” James Lewis, the director of the Technology Policy Program at the Center for Strategic and International Studies, recently told reporters. “The issue is, when this is over, will we go back?”
Lewis raises an important question, if only because while mask recognition does not currently identify faces, the capability is already undergoing research and development. In August, CNN Business reported that the California-based company Trueface is working on tailoring its facial recognition technology to focus on the upper (unmasked) part of the face, in the hope that the tech will be better able to identify a masked subject. As of the CNN article’s publication, the company’s research team planned to roll out its advancements within two months -- that is, around now.
With this in mind, it is possible to envision a world in which our already-deployed mask-recognition technology gains an identification capability. This is problematic, given previous attempts to ban some aspects of authority-deployed technology while keeping others.Â
In 2019, Wired reported that when San Francisco’s anti-surveillance laws and facial recognition ban were proposed, police officials for the city claimed that they had shelved all facial recognition testing as of 2017. What the authorities didn’t publicly mention, however, is that the police department had contracted with a facial recognition firm that same year to maintain a mug shot database, facial recognition software, and a facial recognition server through the summer of 2020.Â
After the ban took effect, the department rushed to dismantle the software; however, the notion that the city’s police force could deploy facial recognition technology without public oversight is troubling and stands as a concerning case study.
Of course, you could argue that mask-recognition tech lacks the privacy concerns that facial-recognition tech poses. Some cities have already made a case to this effect. This August, Portland, Oregon became the first US city to ban both public and private use of facial recognition -- however, according to National Geographic, “Hector Dominguez, the Smart City Open Data Coordinator for Portland, sees mask recognition as different from facial recognition with regards to its privacy risks.”
This argument positions mask recognition software as an exception to the facial recognition bans -- and does so with both merit and cause. After all, the technology does not currently pose privacy risks and could serve a valuable purpose in limiting disease spread via mask-wearing enforcement. However, it also normalizes a pattern of public tracking -- and our experiences in San Francisco and Oregon suggest that authorities may press the moral boundaries of such technology once it is made available.
Suppose we accept mask recognition software as a (temporary) means to identify noncompliance during COVID-19. In that case, it becomes easy to argue that applying newly-developed facial-recognition capabilities to that software would help public health authorities find and identify virus-exposed people during contact tracing efforts. It would be a logical, helpful move. However, at that point of acceptance, we establish a precedent -- intended or not -- of surveillance and tracking people “for their own good.”
The slippery slope very nearly speaks for itself. Mask-recognition software presents a short-term public health opportunity that could open the door to a long-term privacy nightmare. Our fears around COVID-19 are warranted and deserve addressing -- but the measures we take to protect ourselves shouldn’t expose us to privacy ills.