Google learned the hard way it’s better to be transparent about privacy bugs than cover them up
Not long ago, few companies would have dreamed of coming out publicly to tell customers their data might have been exposed but had not been stolen or misused. Disclosures of data incidents were generally reserved for crimes that hit consumers directly in the wallet, like stolen credit card numbers or identities.
But The Wall Street Journal’s Oct. 8 report that Google may have tried to cover up a glitch that exposed the data of Google+ users shows how much things have changed. The cover-up was meant to quell potential calls for regulation of digital privacy, the Journal reported, and it shows how the routine, unreported privacy incidents of yesterday are increasingly being pulled into the limelight.
Google eventually disclosed the bug in a blog post after the Journal’s report, saying it had exposed the personal data of up to 500,000 Google+ users.
Regulators could take notice now, as they did with Uber after the company revealed a security breach it had tried to cover up by paying the hackers who found the data a large sum in the name of a “bug bounty.” Uber has paid $148 million in settlements over the incident, which was relatively minor except for the cover-up, and the Federal Trade Commission will monitor the ride-hailing company for 20 years because of it.
Meanwhile, Facebook has faced questions about its data handling that, before the Cambridge Analytica scandal, would have seemed like a distant problem. CEO Mark Zuckerberg and COO Sheryl Sandberg have both had to testify before Congress about how the company handles user data.
The Google+ incident doesn’t seem particularly egregious. But Google will now face far more stringent oversight from the public, and possibly from government agencies, which view anything less than total transparency with greater skepticism than ever.
Google said the incident was an “exposure” rather than a “breach” of data, meaning personal information was left open for any bad actor to take, but there’s no evidence anyone did.
The company said private data in Google+ could have been viewed by third-party app developers, but there’s no evidence any of these individuals even knew about the bug that caused the vulnerability, let alone exploited it.
Facebook’s recent incident, on the other hand, was definitively a breach. The company said last week it knows that a malicious party was involved, and that up to 50 million accounts were compromised and possibly accessed by that party.
Exposures, however, are actually quite common. They are often discovered by security researchers or by vendors that sell cybersecurity services and use the discoveries as a marketing tool. Recent incidents at companies such as FedEx and Verizon are other examples.
The distinction still matters in terms of likely damage to consumers: a breach carries a much higher likelihood of hurting customers through criminal misuse of their data. But the difference matters far less now when it comes to a company’s reputation for managing and securing personal data, and even an exposure could increase the chances that regulators will step in.
Ten years ago, companies could more easily see the bright line between a breach that was legally reportable to regulators and a bug they could quietly fix without telling anyone. They could deliberate over that critical decision with lawyers and executives for weeks or months.
Google’s argument that it didn’t have to disclose the Google+ incident because it had already fulfilled its legal obligations is almost certainly accurate. But the episode shows that the bar for disclosure has shifted under public and government pressure on digital privacy.
And companies may, like Google, see far more reputational and regulatory risk in their social products. That means we may see more swift and decisive retirements of products that carry outsized risk, as is happening with Google+.