iDenfy launches MCP server to bring live API docs into AI assistants

iDenfy has launched an official Model Context Protocol (MCP) server, which gives developers the ability to plug the company’s live technical documentation directly into AI assistants like ChatGPT, Claude, Gemini, Cursor and Perplexity.
MCP is an open standard introduced by Anthropic in late 2024 and now supported across the major AI ecosystems. It enables AI assistants to pull data from external sources on demand.
By connecting to iDenfy’s MCP server, an AI assistant can read the company’s current API documentation before answering a question. This ensures that field names, endpoints and code examples match the live platform.
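Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The sketch below shows, in plain Python, the shape of the requests an assistant would send to a documentation server: a handshake, a listing of available docs, then a read of one page. The method names (`initialize`, `resources/list`, `resources/read`) come from the MCP specification; the client name and the `docs://` URI are hypothetical placeholders, not real iDenfy identifiers.

```python
import json

def make_request(req_id: int, method: str, params: dict) -> str:
    """Serialize one MCP request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# 1. Handshake: the client announces its protocol version and identity.
init = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-assistant", "version": "0.1"},
})

# 2. Discover which documentation pages the server exposes.
list_docs = make_request(2, "resources/list", {})

# 3. Fetch a specific page before answering the user's question.
#    The URI is a placeholder, not a real iDenfy resource.
read_doc = make_request(3, "resources/read", {"uri": "docs://api/webhooks"})

print(init)
print(read_doc)
```

Because the assistant fetches the page at question time, its answer reflects whatever the server currently publishes rather than whatever was in the model's training data.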
“Developers don’t want to read 50 pages of documentation to find out the name of one field,” said Domantas Ciulde, CEO of iDenfy. “They want to ask a question in the tool they’re already using — ChatGPT, Claude, Gemini — and get an answer that actually works. The MCP server makes that possible.”
“The assistant reads our real docs, so the code it writes uses our real API. No more guessing.”
AI assistants often guess parameter names or reference deprecated endpoints, forcing developers to switch back and forth between chat tools and browser tabs. With the MCP server, developers can generate integration code, debug webhook issues and look up compliance-check requirements entirely within their editor or AI interface.
The server is read‑only and serves only public documentation. It does not access customer data, API keys or verification results; nor does it perform API calls on behalf of developers. iDenfy says the design ensures that developers retain full control over what they share with their chosen AI assistant.
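A read-only documentation server of the kind described above can be sketched as a dispatcher that answers resource requests and rejects everything else. This is an illustrative stand-in, not iDenfy's implementation: the document texts and `docs://` URIs are invented, and only the JSON-RPC framing and the `resources/*` method names follow the MCP specification.

```python
import json

# Hypothetical public documentation corpus keyed by resource URI.
PUBLIC_DOCS = {
    "docs://api/verification": "POST /api/v2/token starts a verification session.",
    "docs://api/webhooks": "Webhook payloads are signed; verify the signature header.",
}

def handle_request(message: str) -> str:
    """Serve read-only MCP-style requests over JSON-RPC 2.0.

    Only resource listing and reading are supported; any method that
    could mutate state or act on a developer's behalf is rejected.
    """
    req = json.loads(message)
    rid, method = req.get("id"), req.get("method")
    if method == "resources/list":
        result = {"resources": [{"uri": uri} for uri in PUBLIC_DOCS]}
    elif method == "resources/read":
        uri = req["params"]["uri"]
        if uri not in PUBLIC_DOCS:
            return json.dumps({"jsonrpc": "2.0", "id": rid,
                               "error": {"code": -32602,
                                         "message": "unknown resource"}})
        result = {"contents": [{"uri": uri, "text": PUBLIC_DOCS[uri]}]}
    else:
        # Read-only by construction: no tool calls, no API requests.
        return json.dumps({"jsonrpc": "2.0", "id": rid,
                           "error": {"code": -32601,
                                     "message": "method not supported"}})
    return json.dumps({"jsonrpc": "2.0", "id": rid, "result": result})
```

Keeping the server stateless and limited to public pages is what lets developers retain control: the only data that ever flows to the AI assistant is documentation they could read in a browser anyway.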
iDenfy’s MCP server is available now and is free for all developers.
How to secure the interaction between systems and AI agents remains an open question, with the nascent field having to contend with the threat of rogue agents and other manipulations.
At the MCP Dev Summit 2026 in New York City, Gluu founder and CEO Michael Schwartz presented his vision for secure AI agent authorization, arguing that authorization should move beyond role-based access control to policies that account for context and complexity.
Meanwhile, Pindrop, in comments to NIST’s National Cybersecurity Center of Excellence, has argued that traditional security models based on one‑time login credentials break down once AI systems start acting on a user’s behalf. Trust now depends on whether the delegation was legitimate, whether the approval was genuine, and whether actions remain attributable and governed over time.
Pindrop highlights human approval integrity and active liveness detection: authorization events should incorporate real‑time signals involving audio and visual analysis, behavioral cues, device intelligence and contextual risk.
Pindrop stresses that AI trust is about understanding whether an agent is authorized for a specific action under current conditions, and whether the human approval chain is intact.
This shift has four practical implications for enterprise identity systems: treating human approvals as first‑class identity events; maintaining clear provenance chains linking approvals to agent actions; incorporating synthetic‑impersonation risk into assurance models; and making trust adaptive as context changes.
AI agent identity is now a governance and authorization challenge, Pindrop argues, as much as a technical one. As agentic systems become more autonomous, enterprises will need frameworks that verify not only the software component but also the authenticity of the human who approved the action, especially in real‑time human‑facing channels where deepfakes and spoofing are already reshaping risk.
More on Pindrop’s thoughts on NIST and AI agent identity can be found on the company’s blog.
Article Topics
AI agents | authentication | authorization | digital identity | iDenfy | identity access management (IAM)