Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts that all shared the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.
“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian. While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.
“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.” As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”
The grading programme, first reported by Vice in August, went a step further than those run by other technology multinationals such as Amazon, Apple and Google. As well as employing human graders to check audio from the company’s voice assistant, Microsoft contractors were also revealed to be listening to calls made using the Skype telephone service, which is owned by the company. An experimental feature enabling live text translation of Skype calls was vetted by humans, Vice revealed, raising the prospect of individuals making sensitive calls without knowing that their conversations were effectively bugged. Microsoft disclosed to users that the company might “analyse audio” of calls, but not that it would be playing them to human workers.
Vice reported at the time that the work was “at least in part work-at-home”. But the extent of the potential security hazards that represented has not been previously reported.

Microsoft said in a statement that, since Vice’s reporting in the summer, it had ended its grading programmes for Skype and Cortana for Xbox and moved the rest of its human grading into “secure facilities” – none of which are in China. “We review short snippets of de-identified voice data from a small percentage of customers to help improve voice-enabled features, and we sometimes engage partner companies in this work,” the company said. “Review snippets are typically fewer than ten seconds long and no one reviewing these snippets would have access to longer conversations. We’ve always disclosed this to customers and operate to the highest privacy standards set out in laws like Europe’s GDPR.
“This past summer we carefully reviewed both the process we use and the communications with customers. As a result we updated our privacy statement to be even more clear about this work, and since then we’ve moved these reviews to secure facilities in a small number of countries. We will continue to take steps to give customers greater transparency and control over how we manage their data,” Microsoft added.