Well-intentioned government privacy wonks need assertive attitude to win
Data protection officials in governments around the world have met every year since 1979 to coordinate their mission to safeguard citizens’ personal information. But the result has been scattershot and passive even for flashpoint topics like biometrics.
And one topic, one that almost certainly would enable large-scale bureaucratic successes, has been ignored until very recently.
Stepping back, it is impressive that some world governments have made privacy and data protection a focus for a generation. And yet, who outside of their organization, the Global Privacy Assembly, knows their offices exist?
The Assembly has targeted important concerns, but typically as one-year efforts, with no follow-up on success and little in the way of follow-on projects. Sometimes, members have misjudged the “best by” date of an issue.
At its 2008 annual gathering, the Assembly discussed the “urgent need” to protect citizens’ privacy in a “borderless world.” It was a noble mission, launched four years after Italy, an early border-closer, began formalizing plans to rebuff the flow of refugees reaching its shores.
The realistic expectation of a borderless world has deteriorated every year since, until COVID-19 crushed it outright.
Indeed, Assembly members, whose ranks then as now did not include China or Russia, tend to build the agendas of their annual gatherings around one or two hot-button issues along with recurring — and essential — boilerplate like standards and global cooperation.
Some agenda items are almost prescient, like the use of facial recognition systems with government IDs, discussed way back in 2005.
This year, Assembly members took up privacy during the pandemic. They recommended everyone involved in creating contact tracing programs practice privacy by design (an agenda item in 2010).
A sensible sentiment like that could make tracing more politically acceptable. It also could have lasting impact by making privacy by design itself boilerplate within governments.
The group also agreed to define the greatest risks to data posed by facial recognition. From there, members are committed to finding “principles and expectations” for ethical use of an individual’s data in face scanning policies.
And in a year when international aid groups have been touting deployment of biometric systems to help refugees find safety, the Assembly asked organizations to be systematic in how they develop field programs that also protect data and privacy.
A good summation of all this year’s recommendations has been posted by vendor OneTrust’s DataGuidance.
Still, it would be easy to overlook the joint statement on “emerging global issues.” It nonetheless deals with a central shortcoming in the group’s messaging. The item says the 41-year-old Assembly will strive to have an “active voice” in policymaking.
It reads like someone asking, politely, for a seat at the table. But the role of AI, and the treatment of private information in AI, is already a potent topic for communities around the world, including across the United States.
After climate change, the ownership of private information will be the defining subject for most nations, developed and developing, in the next decade.
Instead of calling, as the statement does, for members to issue “reminders of the policy issues,” privacy bureaucrats would benefit all stakeholders by looking to the commerce or energy departments of governments for examples of assertive government participation.
AI | biometric data | biometrics | data protection | facial recognition | privacy | Privacy by Design | standards