
Transparency needed for risks, privacy policies of generative AI tools, report says


A new report from the New America Foundation, Translating the Artificial: Universal Design for Communicating Uses, Harms, and Policies for Generative AI Tools, says transparency about the potential benefits, risks, and data privacy policies associated with generative AI tools is needed now more than ever.

Given the rapid advancement of generative AI, the report says there is growing concern that users do not fully understand the capabilities and limitations of these tools, which could lead to misuse or unintended consequences.

The report advocates a standardized system that clearly explains to users the benefits, risks, and data privacy policies associated with generative AI tools. The aim is to make these complex technologies more accessible and understandable to the public, essentially “translating” the technical aspects of AI into easily digestible information.

This concept proposes a standardized labeling system that would be applied consistently across different generative AI tools, allowing users to easily compare and understand the potential impacts of each tool. The goal is better-informed decision-making: the report says that by providing clear and accessible information, users would be better equipped to decide when and how to use generative AI tools.

“Generative AI tools have … fomented a steady stream of misinformation, disinformation, and mal-information, causing an uptick of socioeconomic issues within the realms of labor, national security, and data privacy. An explainability tool to combat the lack of transparency and understandability when engaging with a generative AI tool is needed,” the report argues.

The report is authored by New America Foundation researcher Gabrielle Hibbert, who developed the “Translating the Artificial” concept and proposes a labeling system called “Simplified Algorithms for User Learning” (SAUL).

Hibbert’s research seeks to showcase SAUL as an explainability tool that was developed using universal design features. SAUL displays three sections of information: tool functionality, potential harms of use, and data protection policies.
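
The report describes SAUL’s three-section structure but does not specify a machine-readable format for it. As a rough illustration only, a label with those sections could be modeled as in the following Python sketch; the SAULLabel class, its field names, and the example tool are hypothetical assumptions, not part of Hibbert’s proposal.

```python
from dataclasses import dataclass


@dataclass
class SAULLabel:
    """Hypothetical data model for a SAUL-style label (three sections per the report)."""
    tool_name: str
    functionality: list[str]      # what the tool does, in plain language
    potential_harms: list[str]    # known risks of using the tool
    data_protection: list[str]    # how user data is collected, stored, and shared

    def render(self) -> str:
        """Render the label as plain text a consumer could read at a glance."""
        sections = [
            ("What this tool does", self.functionality),
            ("Potential harms of use", self.potential_harms),
            ("Data protection policies", self.data_protection),
        ]
        lines = [f"SAUL label: {self.tool_name}"]
        for title, items in sections:
            lines.append(f"{title}:")
            lines.extend(f"  - {item}" for item in items)
        return "\n".join(lines)


# Example label for a hypothetical image generator
label = SAULLabel(
    tool_name="ExampleImageGen",
    functionality=["Generates images from text prompts"],
    potential_harms=["Output may imitate a living artist's style without consent"],
    data_protection=["Prompts and uploads may be retained to train future models"],
)
print(label.render())
```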

Hibbert’s report says that “with the rise of generative AI tools, anxieties about the future of work, an individual’s likeness and privacy, and election security have increased. However, users are often not provided with the information needed to understand the how and why – or transparency and explainability – behind the algorithms that power their favorite applications.”

Currently, “explaining generative AI tools to the general public is confined to algorithmic decision systems in social media feeds and privacy labels, but there are still substantial gaps in explaining appropriate uses, potential harms, and available data privacy protections,” Hibbert said, noting that “while explainability efforts focused on social media feeds have increased awareness about the field of algorithmic explainability, there is still a deficit of tools available for consumer audiences.”

“Consumers’ preferences, opinions, and thoughts on the design and features of emerging technologies are often neglected … [and] attempts toward greater explainability and accessibility are focused on the perspective of engineers and data scientists, such as with Mozilla’s AI Guide or the Data Nutrition Project’s Data Labels,” Hibbert said, adding that “although these industry strides are needed, they often come at the expense of developing accessible tools for the consumer.”

In addition, Hibbert said her study found that “consumer explainability tools focus on a specific product’s underlying feature, such as with Meta’s System Cards, an explainability tool aimed at helping consumers better understand their feed,” and that “exercises in providing choice to consumers largely centered on data broker deletion apps such as Permission Slip or Delete Me.”

“Although these tools are highly valuable,” Hibbert said, “they are not centered around explaining generative AI tools. An accessible, easy-to-read tool that can explain the generative AI tool in use and its data policies is needed to help bridge the ever widening explainability gap between tool and consumer.”

The report said interviewees “often felt a lack of ability to control where their private information was and who had access to it. When discussing comprehension of data policies, respondents noted a lack of understanding, citing incomprehensible legalese. Often, their lack of knowledge due to the legalese resulted in feeling a lack of choice regarding which companies would best protect their private information.”

Hibbert says 93 percent of those she interviewed “noted that they do not understand privacy policies and do not read them. While some blindly accept privacy policies, 33 percent of respondents reported trying to read portions of privacy policies.”

And for artists and teachers, “the question of consent and privacy loomed larger,” Hibbert found. “Of those in the artistic and teaching sectors, 100 percent questioned whether the work they created using generative AI tools would be used to further train the model or pass off their work as its own.”

Hibbert said an actor she interviewed “wrestled with how their voice could be used without their consent. Another interviewee hypothesized whether users would feel good knowing their work could be used to train AI models, referencing the lack of awareness of how these policies affect user data.”

“Almost all respondents (96 percent) echoed the sentiment of one interviewee who noted that they ‘don’t feel protected as a consumer,’” Hibbert wrote, adding that when “interviewees were asked to define personal data, many respondents highlighted that personal data included the photos they share, their preferences, searches, and other traditionally understood components of personal information.”

The study says 91 percent of consumers (97 percent of those ages 18–34) skip over data and privacy policies because of a lack of accessibility, leaving them without an understanding of the tools they use.

“By not creating a transparent design system for communicating policies and appropriate uses for generative AI tools, global consumers will experience a wider digital divide, an eroded national security landscape, and continued propagation of misinformation,” Hibbert concluded.

Hibbert argues that implementing SAUL at scale will “not only democratize access to information but could also build a foundation for a more consumer-led tech landscape where consumers have a voice.”

While Hibbert acknowledges that “there will never be a unified set of principles that all consumers agree on,” she believes the “research conducted in this report sheds light on the fact that consumers want accessible information on the data policies of emerging tech tools such as generative AI tools. Often, consumer technology is built without the consent of consumers, with tech companies believing they know what is best for users. Utilizing the SAUL label or a similar label could help rebalance the share of power between tech companies and consumers.”
