Are we human or are we agents: question proves ‘unexpectedly challenging’

In developing and defining generative AI, humankind has implicitly positioned itself against the emerging Other. For all the dreaming, computers cannot be humans, who are born and are made of mush and water and have to go to the bathroom all the time. At least until we start messing seriously with genetic code or create a cyborg on par with Robocop, a robot can never be a descendant of that first weird newt that crawled out of the primordial ooze to invent cheeseburgers, literature and sports fandom.
But, it can pretend. “Passing” is a term used when othered individuals – say, a hip-hop fan at a Phish concert – try to behave in such a way that enables them to more easily blend in. Jordan Peele’s acclaimed horror film Get Out explores the phenomenon through the lens of racism as a kind of psychological kidnapping.
In the case of AI – programs and algorithms we have crafted to simulate human decision-making and conversational patterns – “passing” has become a significant problem. Online, the AI agents acting on our command are increasingly indistinguishable from real people. That makes it increasingly important to know which agents are acting on a human’s behalf, what each is permitted to do, and which have gone rogue or are working for bad actors.
Hence the need for proof of personhood (PoP) technology: we need a path to verified trust for online transactions.
PoP ‘fundamental enabler for maintaining and increasing human agency’
World, the identity company owned by Sam Altman of OpenAI, has been peddling this idea for a couple of years now. The firm has faced frequent regulatory pushback to its methods for collecting the iris biometrics that underpin its World ID system. Now, on its blog, it has published a white paper that goes deep on its mission to provide “Private Proof of Human: Critical Infrastructure for Humanity in a World with Advanced AI.”
The paper, while couched in technical language, also serves as a continued argument for World as a tool for human empowerment. “A world without private Proof of Human (PoH) risks mass disinformation, election manipulation, scalable fraud and privacy-invasive tracking, all of which seriously threaten the stability of democracy and human agency,” it says. “At the same time, PoH protects freedom of speech by elevating human voices above bots and it empowers people through agents that are not blocked as bots but recognized to act on a human’s behalf.”
“Contrary to common perception, (well-implemented) PoH does not increase surveillance but protects against it because it preempts the need for privacy-invasive monitoring. It is a fundamental enabler for maintaining and increasing human agency, safeguarding public discourse, and robust benefits distribution (should they be needed) in a world with advanced AI.”
Proper PoP, says World, cannot come from government digital ID or Face ID. It will, ideally, come from World. However, it says, not to worry: locating the world’s identity network in the same wallet that houses the world’s biggest bot machine is not a “centralizing force.”
Indeed, “without PoH, influence concentrates among actors who leverage bots, coordinated networks, or purchased accounts. This centralizes power in the hands of those with resources to manufacture participation. PoH inverts this dynamic by making participation human-bounded, which prevents authentic voices from being drowned out and empowers individuals.”
The paper lays out, in considerable detail, what World wants to do and how it hopes to achieve it. However, it neglects to address the biggest problem with World’s pitch: the company claims loudly to be giving control back to individuals – but everything about the project oozes with a desire to rule the world: monolithic, evangelical, and born in the lap of a Silicon Valley billionaire.
SEDI is the right architecture for digital infrastructure: Windley
On the humbler end of the spectrum from World is Technometria, the personal tech blog of Phil Windley, Ph.D., author and executive director of the IIW Foundation. In an article entitled “A Legal Identity Foundation Isn’t Optional,” Windley identifies a “proof gap” that occurs when modern verification systems force individuals to rely on institutions to prove facts about themselves – and says the answer is to be found in state-endorsed digital identity (SEDI).
The piece describes “a stack of capabilities required to close the proof gap: credential authenticity, legitimate issuers, trust registries, wallets, revocation, delegation, governance, and accountability. Each layer matters. None is sufficient by itself.”
Fundamentally, however, Windley believes the solid public base required for real digital infrastructure cannot come from private-sector trust frameworks. Hence the need for a system like SEDI.
“SEDI is often described as a credentialing initiative, but its real significance is architectural,” he says. “It provides a publicly governed foundation for first-person digital trust. It gives people a durable, state-endorsed digital identity that can receive, hold, and present credentials across domains.”
“This does not replace institutional authority. Universities still issue degrees. Licensing boards still grant licenses. Employers still attest employment. Hospitals still issue records and treatment information. But SEDI gives those credentials a legally meaningful home in the hands of the person they describe.”
For Windley, the burning questions are who governs the system, who has authority to issue and revoke, what rights people have, and what happens when the system fails.
“That is why SEDI matters so much. It does not compete with credential ecosystems. It underwrites them. It provides the legal and governance substrate that allows portable proof to become real infrastructure rather than a collection of disconnected technical projects.”
“If we want portable proof to work across markets, institutions, and agentic systems, then a publicly governed legal identity foundation is not an added feature. It is the base layer.”
OpenAI’s Agentic Commerce Protocol: like Amazon, but it talks
In his other guise as He Who Would Unleash the Bots, Sam Altman continues to push the agentic capabilities of OpenAI’s ChatGPT. The latest efforts focus on agentic commerce and ChatGPT’s new product discovery features.
A post on OpenAI’s blog explains how much richer and more satisfying the shopping experience can be when using the chatbot.
“Shopping on the web is easy if you already know what you want,” it says. “But when you’re still deciding, it often means jumping between tabs, reading the same ‘best of’ lists, and trying to piece together the right answer. ChatGPT solves that: figuring out what to buy.”
ChatGPT decides for you using the Agentic Commerce Protocol (ACP), “the connective layer between merchants and users throughout discovery.”
“Through ACP, merchants share product feeds and promotions so their catalogs are fully represented in ChatGPT,” it says. “We support multiple delivery paths, including through third-party providers like Salesforce and Stripe, so merchants can participate with the systems they already use. Over time, ACP will serve as a foundation for broader AI-native commerce experiences, including personalization, local availability, and ETAs.”
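OpenAI and Stripe publish the actual ACP specification; the sketch below is only an illustration of the idea of a merchant product feed, and every field name in it is an assumption for demonstration, not the real schema.

```python
# Illustrative sketch only: field names here are assumptions for
# demonstration, NOT the actual ACP schema published by OpenAI/Stripe.

def validate_feed_item(item: dict) -> bool:
    """Check that a hypothetical product-feed entry carries the minimum
    information an agent would need for discovery: identity, price,
    currency and availability."""
    required = {"id", "title", "price", "currency", "availability"}
    return required.issubset(item)

item = {
    "id": "sku-1234",
    "title": "Cordless Drill",
    "price": 99.00,
    "currency": "USD",
    "availability": "in_stock",
    "promotion": {"type": "percent_off", "value": 10},  # optional promo data
}

print(validate_feed_item(item))  # True: all required fields present
```

The point of the sketch is simply that discovery-side commerce reduces to merchants exposing structured catalog data that an agent can filter and rank, whatever the concrete schema turns out to be.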
Retailers including Target, Sephora, Nordstrom, Lowe’s, Best Buy, The Home Depot and Wayfair have already integrated with ACP for product discovery. So has Walmart, which will offer an in-ChatGPT “tailored Walmart environment that supports account linking, loyalty and Walmart payments.”
OpenAI unites its agentic, commercial AI apps
OpenAI has plans to combine ChatGPT, Codex and its Atlas browser into a single desktop app, consolidating its flagship tool suite. The so-called super app will combine the company’s consumer and agentic sides.
Coverage in AI Magazine says the transformative move is to “improve quality, reduce duplication of effort and provide a more cohesive experience for users who currently juggle multiple apps for different tasks.”
Fidji Simo, chief of applications at OpenAI, recently told the Wall Street Journal that the existing fragmentation has slowed the company down, “making it harder to hit the quality bar we want.”
Meanwhile, rival Anthropic’s Claude recently hit number one on Apple’s App Store download chart.
Safety Bug Bounty expands scope of reportable harms
OpenAI has launched a public Safety Bug Bounty program focused on “identifying AI abuse and safety risks across our products,” according to a post.
The new program will complement OpenAI’s Security Bug Bounty, encompassing “issues that pose meaningful abuse and safety risks, even if they don’t meet the criteria for a security vulnerability.” Specific issues include agentic risks such as third-party prompt injection and data exfiltration, an agentic OpenAI product performing disallowed or harmful actions, and model generations that return proprietary information related to reasoning.
Per the blog, “submissions will be triaged by OpenAI’s Safety and Security Bug Bounty teams, and may be rerouted between the two programs depending on scope and ownership.”
Article Topics
agentic commerce | AI agents | biometrics | bug bounty | digital identity | digital trust | proof of personhood | World ID




