April 21, 2021 (LifeSiteNews) — Amazon delivery drivers across the country face the prospect of losing their jobs if they refuse to consent to intrusive new biometrics technology inside their vans and trucks. The technology would capture and store personal information on a “driver account.”
Some 75,000 drivers in the U.S. were asked by the tech giant to sign new contracts at the end of March permitting Amazon to use camera technology, powered by artificial intelligence (AI), to identify and store information about the driver: his face, location, movement, and driving style, and even whether he yawns or shows signs of drowsiness on shift. The information collected is then shared with the dispatcher.
Failure to comply with the request for consent will result in the termination of that driver’s employment with Amazon — or with the third-party delivery service partner (DSP) that employs him — a copy of the “Vehicle Technology and Biometric Consent Agreement” obtained by Motherboard confirmed.
Amazon disclosed in the form that vehicles will be “video-monitored by cameras that are both internal and external and that operate while the ignition is on and for up to 20 minutes after the ignition is turned off.”
“Using your photograph, this Technology, may create Biometric Information, and collect, store, and use Biometric Information from such photographs.”
“This Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance … as a condition of delivery [sic] packages for Amazon, you consent to the use of Technology,” the form states.
The technology is being provided by Netradyne, an AI fleet-management start-up based in San Diego. In a February announcement, reported by The Information, Amazon said the company’s four-lens “Driveri” camera would be installed in its delivery vehicles for “safety” reasons and to improve the “quality of the delivery experience.”
A presentation from Netradyne demonstrates the capabilities of the technology, including identifying a driver’s “seatbelt compliance” and “distraction” level, which ranges from using a cell phone to simply “looking down.” Driving style is also closely monitored, with events such as “hard acceleration” and stop sign violations recorded and swiftly reported to dispatchers.
Deborah Bass, a spokeswoman for Amazon, stated that the decision to implement round-the-clock surveillance of its drivers was made “to help keep drivers and the communities where we deliver safe.”
Bass explained that Amazon previously “piloted the technology from April to October 2020 on over two million miles of delivery routes and the results produced remarkable driver and community safety improvements — accidents decreased 48 percent, stop sign violations decreased 20 percent, driving without a seatbelt decreased 60 percent, and distracted driving decreased 45 percent.”
“Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” she added.
Eva Blum-Dumontet, Senior Research Officer at Privacy International, a U.K.-based charity dedicated to protecting privacy rights across the globe, mocked Bass’ contention that Amazon is “worried about road safety,” calling the notion “disingenuous.”
“The only thing they are concerned about here is their reputation and ensuring they can draw maximum profit from their drivers,” she said, adding that if Amazon “were truly concerned about road safety, the solution would be actually hiring employees and offering them enough protection so that they are not enticed to complete more tasks than it is safe to do so.”
Similarly, a number of employees, who asked not to be named for fear of retaliation from Amazon, expressed concern that the company would use the countless hours of footage as “a punishment system,” likening it to “Big Brother.”
Giving substance to driver concerns, the “biometric consent” form detailed that “Amazon may … use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account.”
An accompanying privacy policy states that Amazon may then use that information “for employment purposes, including as part of an investigation of suspected misconduct or violation of safety or other DSP policies.”
One driver, Vic, quit his job delivering packages for Amazon in the Denver, Colorado, area after learning of the requirement to have AI-powered cameras constantly watch him while working, he told Reuters. “It was both a privacy violation, and a breach of trust … And I was not going to stand for it,” he said.
The installation of high-tech cameras is just the latest in a line of increasingly invasive biometric requirements imposed by Amazon, Vic said, explaining that drivers were already asked to install a monitoring app, Mentor, which logged a number of driving details.
“If we went over a bump, the phone would rattle, the Mentor app would log that I used the phone while driving, and boom, I’d get docked,” he said.
Biometrics technology, including facial recognition software, is becoming increasingly sophisticated, giving rise to new ethical concerns. In January, researchers at Stanford University in California published a paper in which they claimed it is possible to teach a computer to recognize a person’s political leanings purely from a scan of his face.
Using a collection of over one million publicly available images taken from dating websites and public Facebook profiles, the team claims the algorithm correctly predicted political orientation 72% of the time, which is “remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).”
Michal Kosinski, the lead researcher on the team, warned that it is exceedingly easy to obtain images through “ubiquitous CCTV cameras and giant databases of facial images.”
On account of this, the technology could be used for nefarious purposes, he noted, since “unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.”
The researchers added that “even a crude estimate of an audience’s psychological traits [based on facial recognition] can drastically boost the efficiency of mass persuasion. We hope that scholars, policymakers, engineers, and citizens will take notice.”