India charts cautious path on AI regulation with Governance Guidelines

India has unveiled its AI Governance Guidelines, taking a measured approach to regulating artificial intelligence that emphasizes trust, equity and innovation.

It is a different approach from that of the European Union, whose landmark AI Act, along with the bloc’s data privacy rules, was recently simplified by late-hour amendments in a bid to boost innovation.

The guidelines, released by the Ministry of Electronics and Information Technology (MeitY) after a consultation that drew more than 2,500 submissions from government bodies, academia, think tanks, and the private sector, reflect the vision of “AI for All” laid out by Prime Minister Narendra Modi.

On Sunday, Modi called on world leaders at the G20 summit being held in Johannesburg, South Africa, to back the creation of a global compact on artificial intelligence. The Indian leader urged collective action to prevent misuse while ensuring AI is harnessed for inclusive and responsible development.

“We all have to ensure that AI is used for the global good and its misuse is avoided,” he said, as reported by The New Indian Express. “For this, we need to create a global compact on AI that is based on some core principles.” Modi highlighted the importance of oversight, safety-by-design, transparency and sanctions against the use of AI for deepfakes, crime and terrorism.

The Prime Minister stressed that AI systems impacting human life, security or public trust must remain accountable. “Most importantly, AI should enhance human capabilities, but the ultimate responsibility for decision-making will always remain with humans,” he added.

Modi’s remarks echo the published Guidelines, which stress that AI must serve as an enabler of inclusive development, from improving rural healthcare diagnostics and personalizing education in local languages to enhancing climate resilience for farmers.

Recognizing both the promise and perils of AI, the framework seeks to balance innovation with accountability. It outlines principles of trust, fairness, equity, transparency, safety, resilience and sustainability, while encouraging responsible innovation over excessive restraint.

Rather than proposing new laws, the panel recommends leveraging existing legislation such as the IT Act and the Digital Personal Data Protection Act, with targeted amendments where necessary.

The guidelines also call for new institutions to strengthen governance, including an AI Governance Group (AIGG) to coordinate policy and an AI Safety Institute (AISI) to conduct technical evaluations and risk assessments. They propose graded liability systems, grievance redressal mechanisms, transparency reporting, and broader access to data and computing resources through the IndiaAI Mission, alongside integration with the country’s Digital Public Infrastructure.

India’s approach contrasts with the European Union’s AI Act, which imposes legally binding obligations, strict risk classifications, and penalties for non-compliance. While the EU framework requires rigorous pre-market testing and oversight for high-risk systems, India’s guidelines adopt a lighter, more flexible model.

Modi has announced that India will host the AI Impact Summit in February, themed Sarvajan Hitay, Sarvajan Sukhay (“welfare for all, happiness for all”), and invited G20 nations to participate.

Realistic mandate key to India’s new AI framework, policy expert argues

In an editorial for Policy Edge, Sumeysh Srivastava argues that harms from AI — such as bias, misinformation and deepfakes — cannot be eliminated outright. However, Srivastava, a partner at The Quantum Hub who focuses on law and policy, writes that they can be mitigated through a combination of prevention, accountability and adaptive risk management.

Preventive measures like responsible platform design, better data practices and watermarking reduce baseline risks. The AI Governance Guidelines 2025 embody an integrated approach by mapping existing laws (IT Act, DPDP Act, BNS) to AI-related harms, embedding graded liability, and building feedback loops through reporting and sandboxes.

This phased model, which balances harm reduction, proportionate responsibility and gradual rule maturity, fits India’s developmental context, Srivastava argues. Success, however, depends on building institutional capacity, clear enforcement and maintaining pathways from voluntary to mandatory compliance.

A central institution in this framework is the proposed AI Safety Institute (AISI), tasked with developing standards that reflect India’s socio-cultural diversity. Srivastava highlights the challenge: testing AI models for bias across 22 languages, varied socio-economic conditions, caste dynamics, and regional contexts is a scale of complexity few regulators have attempted.

The biggest hurdles will be creating representative datasets, resourcing evaluations and ensuring ongoing monitoring. Pragmatic solutions include hub-and-spoke partnerships with academia and regulators, public-private testbeds, and leveraging existing infrastructure like Bhashini and AIKosh.

Durable funding and institutional independence are essential, and AISI’s mandate must have realistic scope, beginning with high-impact sectors such as finance or critical infrastructure.

On ethics, Srivastava suggests India should establish explicit principles (such as child safety, non-discrimination, accountability, autonomy) without rigid rules that risk stifling innovation. Governance tools should operationalize these principles through obligations like dataset documentation, explainability, grievance redress, and safety evaluations, supported by sandboxes and certifications. Some principles, particularly child safety and non-discrimination, require immediate protections, while others can remain flexible and evolve.

Drawing on India’s Digital Public Infrastructure model, safeguards should be embedded in design rather than relying solely on procedural compliance. Over time, evidence from audits and incident databases can justify proportionate mandates in high-risk domains. Srivastava concludes that India’s challenge is sequencing ethical clarity with adaptive governance — acting early enough to prevent harms without prematurely constraining innovation.
