WEF pushes data intermediaries for privacy protection; pushing for details would be better
A new think piece promotes the idea of data intermediaries that would operate between individuals and any organization that covets their biometric and other personal data.
People would register their privacy and data-sharing preferences with one of these third parties, and data harvesters would have to go through the intermediary to access and use information according to the owner’s instructions.
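The mechanics the report envisions can be sketched in code, purely for illustration. Everything below — the `DataIntermediary` broker, the `Preferences` fields, the purpose and category names — is hypothetical and not drawn from the WEF paper; it simply shows the deny-by-default gatekeeping such an intermediary would perform.

```python
from dataclasses import dataclass, field


@dataclass
class Preferences:
    # Purposes the data owner has approved, e.g. {"age_verification"}
    allowed_purposes: set = field(default_factory=set)
    # Data categories the owner is willing to share at all
    shareable_categories: set = field(default_factory=set)


class DataIntermediary:
    """Hypothetical broker holding data owners' sharing preferences.

    Harvesters must route requests through request_access() rather than
    contacting the data owner directly.
    """

    def __init__(self):
        self._prefs = {}  # owner_id -> Preferences

    def register(self, owner_id, prefs):
        self._prefs[owner_id] = prefs

    def request_access(self, owner_id, category, purpose):
        prefs = self._prefs.get(owner_id)
        if prefs is None:
            return False  # unknown owner: deny by default
        return (category in prefs.shareable_categories
                and purpose in prefs.allowed_purposes)


# Usage: a harvester asks to use a face template for marketing and is
# refused, because the owner only approved age verification.
broker = DataIntermediary()
broker.register("alice", Preferences({"age_verification"}, {"face_template"}))
print(broker.request_access("alice", "face_template", "marketing"))
print(broker.request_access("alice", "face_template", "age_verification"))
```

The design choice worth noting is that every path that is not explicitly permitted is denied, which is the inversion of today's default, where data flows unless the owner finds and flips the right setting.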
Today, of course, people have to choose data privacy settings for many real-world transactions, for virtually all electronics they purchase and for every online site or service.
Much is also made of how everyone has a vital interest in controlling personal data, but only businesses and governments are acting on it, typically at the expense of the data’s owners. Individuals do little to manage their own data, and just as often can do little.
The hope, according to the report’s authors, is a future (once infrastructure and standards exist) in which intermediaries store the data themselves, minimizing opportunities for breach and misuse.
Negotiating access fees and misuse settlements would also be key services. The outfits might also do some processing such as anonymization, aggregation, and benchmarking, which could reduce exposure risks.
Naturally, there is a pitch to insert AI into the system, a move that gives the intermediary concept currency with software investors.
Four of the report’s 46 pages are dedicated to the role of digital identity. The paper describes a current digital ID ecosystem based on user consent and “traditional intermediaries,” along with a potential shift towards user control through personal data stores, on-device storage and “more advanced data intermediaries.” In the future, WEF says, next-level data intermediaries can be embedded anywhere, again increasing the agency of digital ID users.
It all sounds encouraging, as does any coherent concept that protects individuals’ biometric and other data, but too little is made of the significant hurdles and potential conflicts.
There is hand-waving in the report about trusted third-party intermediaries, for instance, but how would a firm demonstrate trustworthiness? Many companies and government agencies have been working on that problem for years, to no avail.
Then there is Google, which congratulated itself for hiring two AI ethics researchers only to fire them.
What about conflicts of interest?
Would intermediaries be prohibited from quietly serving data buyers the way some real estate agencies can create the appearance of representing sellers while actually taking fees from buyers?
And how would data intermediaries be any more immune to hacking and misuse? The amount of proprietary and national security data on the dark web indicates that even information that people are prepared to fight over cannot be safeguarded.
Ultimately, even if intermediaries can prove viable, the entire industry remains under a pall of distrust, as demonstrated by the ongoing battles over what ethical facial recognition looks like.
Ideas like the Forum’s are good, but they are no better than any others if trustworthiness is an assumption rather than a fundamental building block.