Mobbeel: Should robots receive legal identity?

The rise of autonomous AI agents has raised new questions about whether they should be granted a legal identity. The debate is complex because it ultimately hinges on the question of responsibility.
Despite the controversy, some companies believe it is only a matter of time before identity verification methods are needed for non-humans such as AI agents, chatbots and robots.
“Identity verification methods for AI agents would need to be specifically designed for them, as current systems are built to distinguish humans from machines – not one machine from another,” says Rafael Campillo, chief marketing officer at identity verification firm Mobbeel Solutions.
Authenticating a machine would be a complex challenge requiring a multi-layered approach that would factor in how the AI was built and trained and whether its behavior aligns with expectations, Campillo explains in a recent blog post.
Each AI could have a unique “digital DNA” or an “algorithm fingerprint” that includes its neural network architecture and historical decision-making patterns recorded on a blockchain. An AI program that approves loans, for instance, could be identified by its risk-assessment patterns.
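The idea of an “algorithm fingerprint” could be sketched as a stable hash over a model’s architecture description and its recorded decision history. The sketch below is a hypothetical illustration, not Mobbeel’s implementation; the function name, the sample architecture fields and the decision-log format are all assumptions.

```python
import hashlib
import json

def algorithm_fingerprint(architecture: dict, decision_log: list) -> str:
    """Derive a 'digital DNA' hash from an AI's architecture description
    and its historical decision records."""
    # Canonical JSON (sorted keys) so identical inputs always hash identically.
    payload = json.dumps(
        {"architecture": architecture, "decisions": decision_log},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Example: a loan-approval agent identified by its risk-assessment patterns.
arch = {"type": "gradient_boosting", "n_estimators": 300, "max_depth": 6}
log = [
    {"applicant_risk": 0.12, "decision": "approve"},
    {"applicant_risk": 0.81, "decision": "deny"},
]
fingerprint = algorithm_fingerprint(arch, log)
```

In a scheme like the one described, such a fingerprint – rather than the raw model – is what would be anchored on a blockchain, so any later change to architecture or decision patterns would produce a different hash.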
A second criterion would rely on metrics such as processing speed on critical tasks and statistical deviations in outputs – what Campillo calls “behavioral biometrics for AI.” An AI agent focused on trading shares, for example, might claim to make conservative investments while operating at high risk; its behavioral footprint would expose the inconsistency.
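The trading example above could be checked with a simple consistency test that compares an agent’s declared risk profile against the statistics of its observed behavior. This is a minimal sketch under assumed names and thresholds, not a real behavioral-biometrics system.

```python
from statistics import mean

def behavioral_consistency(claimed_max_risk: float,
                           observed_risks: list,
                           tolerance: float = 0.05) -> bool:
    """Flag an agent whose observed risk-taking deviates from its
    stated profile: True if behavior matches the claim."""
    return mean(observed_risks) <= claimed_max_risk + tolerance

# A "conservative" trading agent (claimed max risk 0.2) trading at high risk:
consistent = behavioral_consistency(0.2, [0.7, 0.65, 0.8, 0.75])
# consistent is False: the behavioral footprint exposes the inconsistency.
```

A production system would track many such signals (latency, output distributions, error rates) rather than a single average, but the principle is the same: the claim is verified against measured behavior.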
The AI agent would also have “dynamic authenticity certificates” – digital certificates that don’t just identify an AI but also track its evolution in real time, combining NFTs, smart contracts and timestamping.
“When an AI is created, an NFT could be minted to reflect its ethical boundaries and purpose,” writes Campillo. “Unlike static certificates, this would update automatically via smart contracts – logging every major change, from new learning capabilities to unexpected deviations.”
While companies such as Mobbeel are coming up with possible paths toward legal identities for AI, governments are still debating how to approach the problem.
In the European Union, lawmakers have proposed introducing an “electronic personality” or “e-personality” to ensure that robots making autonomous decisions can be held responsible for the damage they may cause. In 2017, the European Parliament adopted a resolution with recommendations to the Commission on Civil Law Rules on Robotics.
The question, however, remains unresolved. One side of the debate argues that AI agents and robots are ultimately tools and should not be considered legal entities, as that would diminish the responsibility of the humans who created them. Another side claims that legal personhood for AI agents cannot be modeled on either natural persons or legal entities.
“Undoubtedly, this remains a controversial topic with much debate still ahead,” says Campillo.