India's AI Regulation Framework 2026: MeitY Guidelines for Tech Companies

Dhanush Prabha
Reviewed by CAs & Legal Experts: Nebin Binoy & Ashwin Raghu

India is building its artificial intelligence regulatory framework not through a single sweeping law but through a layered architecture of MeitY advisories, the Digital Personal Data Protection Act, IT Act provisions, and sector-specific regulations. For tech companies operating AI platforms, deploying generative AI models, or processing Indian user data through machine learning systems, the compliance landscape in 2026 is defined by real obligations with real consequences. MeitY's March 2024 advisories established self-certification and labelling requirements for AI platforms. The DPDP Act, 2023 imposes data processing obligations with penalties up to ₹250 crore. The IT Intermediary Rules govern platform liability. And sector regulators from RBI to SEBI have layered on AI-specific requirements for their domains. This guide maps every regulatory obligation that AI companies in India must navigate in 2026, from entity registration and data protection compliance to content labelling, algorithmic transparency, and intellectual property protection.

  • India regulates AI through MeitY advisories + DPDP Act + IT Act + sector rules, not a standalone AI law
  • MeitY's March 2024 revised advisory requires self-certification and AI content labelling for all AI platforms
  • The DPDP Act, 2023 imposes consent, purpose limitation, and security obligations on AI data processing with penalties up to ₹250 crore
  • AI platforms may lose IT Act Section 79 safe harbour if they generate content rather than host it
  • The IndiaAI Mission (₹10,372 crore) creates subsidized compute access and responsible AI benchmarks
  • No AI-specific licence exists; companies register as standard entities and comply with applicable sector regulations
  • CERT-In mandates 6-hour cybersecurity incident reporting for AI platforms

India's AI Regulatory Architecture: How the Framework Works

Unlike the European Union's AI Act, which creates a single legislative framework with risk-based classification, India's AI governance operates as a multi-layered regulatory stack. Each layer addresses a different dimension of AI deployment, and tech companies must comply with all applicable layers simultaneously.

The first layer is MeitY's advisory framework, issued under the authority of the IT Act, 2000. These advisories set principles and operational requirements for AI platforms, including content labelling, bias prevention, and self-certification. While advisories are not primary legislation, non-compliance can trigger enforcement under the IT Intermediary Rules and lead to loss of safe harbour protection.

The second layer is the Digital Personal Data Protection Act, 2023, which regulates how AI companies collect, process, store, and transfer personal data. Every AI model trained on Indian user data or processing personal data of Indian users is subject to DPDP Act obligations, regardless of where the company is incorporated.

The third layer comprises sector-specific regulations from bodies like RBI (financial AI), SEBI (algorithmic trading and robo-advisory), IRDAI (insurance AI), and TRAI (telecom AI). These regulators impose additional requirements on top of MeitY's general framework.

The fourth layer is the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which govern platform obligations including content moderation, grievance redressal, and compliance officer appointments. AI platforms classified as intermediaries must comply with these rules to retain safe harbour protection under Section 79 of the IT Act.

An AI platform that generates original content (like a generative AI chatbot or image generator) may not qualify as an intermediary under Section 2(1)(w) of the IT Act, which defines intermediaries as entities that receive, store, or transmit data on behalf of others. If classified as a content publisher rather than an intermediary, the platform loses Section 79 safe harbour and becomes directly liable for all outputs. This classification question is the most significant unresolved legal issue in India's AI regulation.

MeitY AI Advisories: Timeline and Compliance Requirements

MeitY's engagement with AI regulation has evolved rapidly. Understanding the timeline is critical for compliance teams assessing their obligations.

MeitY AI Regulatory Timeline: Key Milestones
| Date | Development | Impact on Tech Companies |
| --- | --- | --- |
| June 2018 | NITI Aayog publishes National Strategy for AI (#AIforAll) | Establishes five priority sectors: healthcare, agriculture, education, smart cities, transportation |
| February 2021 | NITI Aayog releases Responsible AI Part 1 (Principles) | Defines seven responsible AI principles; no legal enforcement |
| August 2021 | NITI Aayog releases Responsible AI Part 2 (Operationalizing Principles) | Provides implementation guidance for responsible AI; referenced in government procurement |
| February 2023 | IT Amendment Rules require social media to address AI-generated misinformation | Platforms must use technology-based measures to identify and flag AI-generated content |
| August 2023 | Digital Personal Data Protection Act, 2023 enacted | Comprehensive data protection obligations apply to all AI data processing |
| March 1, 2024 | MeitY advisory: government approval required for deploying untested AI models | Initial requirement for pre-deployment government permission; created industry concern |
| March 15, 2024 | MeitY revised advisory: self-certification replaces government approval | Platforms must self-certify compliance, label AI content, prevent unlawful outputs |
| March 2024 | IndiaAI Mission launched with ₹10,372 crore allocation | Subsidized GPU compute, AI datasets, responsible AI standards for startups |
| 2025–2026 | DPDP Act rules and Data Protection Board operationalization | Specific compliance timelines, consent mechanisms, and penalty enforcement begin |

The most operationally significant development is MeitY's March 15, 2024 revised advisory, which replaced the controversial government approval requirement with a self-certification model. Under this advisory, every AI platform accessible to Indian users must ensure three things: the AI model does not generate content that violates Indian law (including hate speech, defamation, and obscenity provisions), all AI-generated outputs carry appropriate labels or identifiers, and the platform takes reasonable measures to prevent algorithmic bias.

Register Your AI Company in India

Private Limited Company registration is the recommended structure for AI startups. IncorpX handles complete incorporation with MOA/AOA drafting, PAN, TAN, GST, and compliance setup.

Start Company Registration

Digital Personal Data Protection Act: AI-Specific Obligations

The DPDP Act, 2023 is the most consequential legislation affecting AI companies in India. While not AI-specific, its data processing framework directly governs how AI models collect training data, process user inputs, generate outputs, and handle personal information.

Under Sections 5 and 6 of the DPDP Act, AI companies must obtain free, specific, informed, unconditional, and unambiguous consent from data principals before processing their personal data. The consent must specify the exact purpose of data processing. For AI companies, this means:

  • Training data collection: If an AI company scrapes or collects personal data to train models, it must obtain consent for this specific purpose or demonstrate a legitimate use exemption
  • User input processing: When users interact with AI platforms, the platform must disclose that inputs may be processed, stored, or used for model improvement
  • Output generation: AI platforms generating outputs that contain personal data of third parties must have a lawful basis for such processing
  • Automated decision-making: While the DPDP Act does not explicitly address automated decision-making rights (unlike GDPR Article 22), the purpose limitation and transparency requirements apply to AI-driven decisions

Data Principal Rights and AI Models

The DPDP Act grants data principals the right to correction and erasure of personal data under Section 12. For AI companies, this creates a technical challenge: personal data embedded in trained AI model weights cannot be surgically removed without retraining or applying machine unlearning techniques. Companies must implement processes to address erasure requests and document their technical approach to data deletion from AI systems.

Children's Data and AI

Section 9 of the DPDP Act imposes stricter requirements for processing children's data (persons under 18). AI platforms accessible to children must obtain verifiable parental consent, must not process children's data for behavioural monitoring, tracking, or targeted advertising, and must not deploy AI systems that could cause detrimental effects on children's well-being. AI edtech platforms, gaming AI, and social media AI features are directly affected.

Non-compliance with DPDP Act provisions carries significant financial penalties determined by the Data Protection Board: up to ₹250 crore for failure to implement reasonable security safeguards to prevent a data breach, up to ₹200 crore for failure to notify the Board and affected data principals of a breach (the same ceiling applies to breaches of children's data obligations), and up to ₹150 crore for non-compliance with significant data fiduciary obligations. Penalties are assessed per instance of non-compliance and are not capped at an aggregate level in the Act.

Self-Certification Framework: What AI Companies Must Do

MeitY's self-certification approach places the compliance burden on AI companies to proactively verify and document their adherence to responsible AI principles. Unlike the EU AI Act's third-party conformity assessment for high-risk systems, India's model requires companies to self-attest compliance and maintain auditable documentation.

The practical self-certification process involves the following steps:

  • Content safety assessment: Document that the AI model has been tested against Indian legal standards, including the Bharatiya Nyaya Sanhita, 2023 (which replaced the Indian Penal Code), content-related IT Act provisions (Section 66A itself was struck down in Shreya Singhal v. Union of India, 2015), and defamation laws
  • Labelling implementation: Deploy technical systems to label all AI-generated outputs with persistent, visible identifiers. This includes metadata tagging, watermarking for visual content, and disclosure notices for text outputs
  • Bias audit documentation: Conduct and document algorithmic fairness assessments, testing for bias across protected categories including religion, caste, gender, and disability
  • Grievance mechanism: Establish a user complaint process for AI-related harms, aligned with IT Intermediary Rules requirements
  • Compliance record keeping: Maintain records of testing, labelling, bias audits, and complaint resolution for regulatory inspection

Create an internal AI Compliance Register that documents every AI model deployed, its training data sources, content safety testing results, labelling mechanisms, bias audit findings, and user complaint history. This register serves as the primary evidence of self-certification compliance during any regulatory inquiry or audit. Engage professional compliance services to structure this documentation framework from the outset.
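The register described above maps naturally to one structured record per deployed model. A hedged sketch, assuming an in-house schema (the field names are illustrative; MeitY does not mandate a format):

```python
from dataclasses import dataclass


@dataclass
class ModelRegisterEntry:
    """One row of an internal AI Compliance Register (illustrative fields)."""
    model_name: str
    training_data_sources: list
    safety_tested: bool = False        # content safety assessment documented?
    labelling_mechanism: str = ""      # e.g. "disclosure prefix + metadata tag"
    bias_audit_date: str = ""          # ISO date of the last fairness audit
    open_complaints: int = 0

    def certifiable(self) -> bool:
        # Self-certification readiness: safety testing, a labelling
        # mechanism, and a bias audit must all be on record.
        return (self.safety_tested
                and bool(self.labelling_mechanism)
                and bool(self.bias_audit_date))


entry = ModelRegisterEntry(
    model_name="support-chatbot-v2",
    training_data_sources=["licensed support tickets"],
    safety_tested=True,
    labelling_mechanism="disclosure prefix + metadata tag",
    bias_audit_date="2026-01-15",
)
print(entry.certifiable())  # True once all evidence fields are filled
```

Keeping the readiness check in code means a compliance dashboard can flag any model that slips out of a certifiable state, for instance after a new version ships without a fresh bias audit.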

India vs Global AI Regulation: Comparative Analysis

India's regulatory philosophy differs fundamentally from other major jurisdictions. Understanding these differences is essential for AI companies operating across borders or planning market entry into India.

AI Regulation Comparison: India vs EU vs USA vs China
| Regulatory Dimension | India (MeitY Framework) | EU (AI Act) | USA (Executive Order + State Laws) | China (CAC Regulations) |
| --- | --- | --- | --- | --- |
| Legislative approach | Advisory-driven, multi-law framework | Single comprehensive legislation | Sectoral; executive orders and state-level laws | Multiple targeted regulations (algorithm, deepfake, GenAI) |
| Risk classification | No formal risk tiers | Four-tier risk system (unacceptable to minimal) | No federal risk classification | Content-based classification |
| Conformity assessment | Self-certification | Third-party audit for high-risk AI | Varies by sector and state | Government security assessment for GenAI |
| Data protection nexus | DPDP Act, 2023 | GDPR (pre-existing) | No federal data protection law | PIPL (Personal Information Protection Law) |
| Content labelling | Mandatory (MeitY advisory) | Mandatory for certain AI systems | No federal requirement | Mandatory for all AI-generated content |
| Maximum penalties | ₹250 crore under DPDP Act, plus IT Act provisions | €35M or 7% of global turnover | Varies by sector | Fines, service suspension, criminal liability |
| Innovation stance | Pro-innovation, light-touch regulation | Precautionary, compliance-heavy | Innovation-first, voluntary commitments | State-controlled innovation |
| Enforcement body | MeitY, Data Protection Board, sector regulators | AI Office and national authorities | FTC, sector regulators, state AGs | Cyberspace Administration of China (CAC) |

India's comparative advantage for AI companies is its innovation-enabling approach. The self-certification model, absence of mandatory third-party audits, and lack of a formal risk classification system make India's compliance requirements less burdensome than the EU's. However, the DPDP Act's penalty structure is among the strictest globally, and the multi-regulator environment creates complexity for companies operating across sectors.

IT Act and Intermediary Rules: Platform Liability for AI

The Information Technology Act, 2000 and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 form the primary legal infrastructure governing AI platform operations in India.

Safe Harbour Under Section 79

Section 79 of the IT Act provides intermediaries with immunity from liability for user-generated content, provided they act as passive conduits and comply with due diligence requirements. For AI companies, the critical question is whether a platform deploying generative AI qualifies as an intermediary. The legal distinction turns on whether the platform is hosting or transmitting content (intermediary) versus creating or generating content (publisher). A pure AI API service that processes user queries may retain intermediary status, while a consumer-facing generative AI chatbot that produces original content may not.

Due Diligence Obligations

AI platforms classified as intermediaries must comply with the following due diligence requirements under the IT Intermediary Rules:

  • Publish terms of service, privacy policy, and user agreement clearly accessible to all users
  • Appoint a Grievance Officer (resident in India) who acknowledges complaints within 24 hours and resolves them within 15 days
  • Appoint a Chief Compliance Officer and Nodal Contact Person (for Significant Social Media Intermediaries with 5 million+ users)
  • Remove or disable access to content flagged by government orders within 36 hours
  • Deploy technology-based measures to proactively identify and remove content depicting child sexual abuse material, content previously ordered to be removed, and misinformation identified through fact-checking mechanisms
  • Retain information removed or disabled for 180 days for investigation purposes

AI platforms with 5 million or more registered users in India are classified as Significant Social Media Intermediaries and face enhanced obligations including appointing a Chief Compliance Officer, a Nodal Contact Person, and a Resident Grievance Officer, all of whom must be resident in India. Monthly compliance reports must be published. AI chatbot platforms with large Indian user bases should plan for SSMI classification.

Sector-Specific AI Regulations

Beyond MeitY's horizontal framework, AI companies operating in regulated sectors face additional compliance requirements from domain-specific regulators.

Financial Services (RBI and SEBI)

The Reserve Bank of India has addressed AI in financial services through multiple circulars. AI-driven lending decisions and credit scoring models must provide explainability to borrowers, documenting the factors that influenced automated credit decisions. The RBI's Digital Lending Guidelines (2022) require that any AI-based lending platform must disclose the use of automated decision-making, provide the borrower with the key reasons for credit decisions, and ensure human oversight in the process.

SEBI regulates AI-driven algorithmic trading through its framework for algorithmic trading and co-location (circular SEBI/HO/MRD/DP/CIR/P/2018/73). AI trading algorithms must be approved by stock exchanges, tested in simulation environments before live deployment, and equipped with kill switches for emergency deactivation. Robo-advisory platforms must register as Investment Advisers under SEBI (Investment Advisers) Regulations, 2013.

Insurance (IRDAI)

IRDAI has issued guidelines on the use of AI in insurance underwriting, claims processing, and fraud detection. AI models used for risk assessment and premium calculation must not discriminate based on protected characteristics, and insurers must maintain explainability in automated claims decisions. The use of AI in health insurance underwriting is subject to additional scrutiny under IRDAI's Health Insurance Regulations.

Healthcare

AI medical devices and diagnostic tools fall under the Medical Devices Rules, 2017 as amended, and may require approval from the Central Drugs Standard Control Organisation (CDSCO). AI-based Software as a Medical Device (SaMD) is classified based on risk, and high-risk AI diagnostic tools require clinical validation and regulatory clearance before deployment.

IncorpX's compliance advisory team helps AI companies map their regulatory obligations across MeitY, RBI, SEBI, and sector-specific frameworks. Get a comprehensive compliance assessment.

Schedule a Compliance Review

AI Content Labelling and Deepfake Regulation

Content labelling is the most operationally immediate compliance requirement from MeitY's advisories. AI platforms must implement labelling across all content types:

  • Text content: Clear disclosure that the output was generated by AI, typically through a visible notice or prefix
  • Image content: Embedded watermarks or metadata tags identifying AI generation. C2PA (Coalition for Content Provenance and Authenticity) standard compliance is emerging as a best practice
  • Audio content: Watermarking and metadata tagging for AI-generated speech, music, or sound effects
  • Video content: Persistent visual identifiers and metadata for AI-generated or AI-manipulated video, including deepfakes
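For text and image outputs, the labelling duties above can be sketched as a small wrapper that attaches both a visible disclosure and a machine-readable provenance tag. This is an illustrative sketch only, not a C2PA implementation: a production deployment would use a C2PA toolkit to produce cryptographically signed manifests, and the metadata keys below are assumptions, not a standard.

```python
import json
from datetime import datetime, timezone

DISCLOSURE = "[AI-generated] "


def label_text(output: str) -> str:
    """Prefix a visible AI disclosure notice to generated text."""
    return DISCLOSURE + output


def provenance_metadata(model_id: str) -> str:
    """Machine-readable provenance tag to embed alongside generated media
    (e.g. in image XMP/EXIF fields). Keys are illustrative, not a standard."""
    return json.dumps({
        "generator": model_id,
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    })


print(label_text("Here is your summary of the document."))
print(provenance_metadata("imagegen-v1"))
```

The point of pairing the two is persistence: a visible notice alone can be cropped or deleted by the user, while embedded metadata alone is invisible to ordinary viewers, so the advisory's "persistent, visible" requirement is best read as needing both.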

The deepfake dimension is particularly significant. MeitY's advisories, combined with IT Act provisions on impersonation (Section 66D) and the Bharatiya Nyaya Sanhita provisions on defamation, create liability for AI platforms that enable deepfake creation without adequate safeguards. Platforms offering AI video generation or face-swapping capabilities must implement consent verification for use of identifiable persons' likenesses and maintain audit trails of content creation.

An AI platform that enables creation of non-consensual deepfakes can face criminal liability under Section 66D of the IT Act (impersonation using computer resource, up to 3 years imprisonment), civil liability for defamation, and regulatory action under IT Intermediary Rules for failure to prevent harm. Implementing robust content moderation, consent verification, and output monitoring for deepfake misuse is not optional; it is a direct legal requirement.

IndiaAI Mission: Opportunities for AI Companies

The IndiaAI Mission, approved by the Cabinet in March 2024 with an allocation of ₹10,372 crore, is the government's most significant initiative to build India's AI ecosystem. For AI companies, it creates both opportunities and compliance reference points.

Key Components of the IndiaAI Mission

  • IndiaAI Compute Capacity: Building a network of 10,000+ GPU compute infrastructure through empanelled cloud service providers, available to startups, researchers, and government projects at subsidized rates
  • IndiaAI Innovation Centre: Development and deployment of foundational AI models, including large multimodal models trained on Indian languages and datasets
  • IndiaAI Datasets Platform: A unified data platform aggregating non-personal and anonymized datasets from government sources for AI training and research
  • IndiaAI Application Development: Funding for AI applications in agriculture, healthcare, education, and governance
  • IndiaAI FutureSkills: AI skilling programs to build talent capacity
  • IndiaAI Startup Financing: Financial support for AI startups through the AI startup ecosystem
  • Safe and Trusted AI: Development of responsible AI standards, testing frameworks, and certification mechanisms

The Safe and Trusted AI pillar is particularly relevant for compliance. The IndiaAI Mission is developing responsible AI benchmarks and testing frameworks that may evolve into formal compliance standards. AI companies that align with these emerging standards early will have a significant advantage when the framework matures into enforceable requirements.

To access IndiaAI Mission resources, companies should be registered in India (preferably as a Private Limited Company), hold DPIIT Startup India recognition for preferential access, and demonstrate alignment with the mission's responsible AI principles.

Compliance Checklist for AI Companies in India

Use this comprehensive checklist to assess your AI company's compliance status across all regulatory layers applicable in 2026.

AI Company Compliance Checklist: India 2026
| Compliance Area | Requirement | Applicable Law/Regulation | Priority |
| --- | --- | --- | --- |
| Entity registration | Register as Private Limited Company or LLP | Companies Act, 2013 / LLP Act, 2008 | Immediate |
| DPIIT recognition | Obtain Startup India recognition for tax benefits and scheme access | DPIIT Notification (2019) | High |
| Data protection registration | Register with Data Protection Board as Data Fiduciary (when notified) | DPDP Act, 2023 | Mandatory |
| Consent mechanism | Implement valid consent collection for personal data processing by AI models | DPDP Act, 2023 (Sections 5–6) | Mandatory |
| AI content labelling | Label all AI-generated outputs with persistent, visible identifiers | MeitY Advisory (March 2024) | Mandatory |
| Self-certification | Self-certify that AI models do not generate unlawful content | MeitY Advisory (March 2024) | Mandatory |
| Bias audit | Conduct and document algorithmic fairness testing | MeitY Advisory + NITI Aayog Responsible AI Principles | High |
| Grievance officer | Appoint India-resident Grievance Officer | IT Intermediary Rules, 2021 | Mandatory (if intermediary) |
| Cybersecurity reporting | Report incidents to CERT-In within 6 hours | CERT-In Directions (April 2022) | Mandatory |
| Children's data safeguards | Implement parental consent and restrict tracking for users under 18 | DPDP Act, 2023 (Section 9) | Mandatory (if applicable) |
| IP protection | File patents for novel AI methods, register trademarks for AI products | Patents Act, 1970 / Trade Marks Act, 1999 | Recommended |
| Sector-specific compliance | Comply with RBI, SEBI, IRDAI, or other sector rules if applicable | Sector-specific regulations | Mandatory (if applicable) |

Get Your AI Startup Compliance-Ready

IncorpX offers end-to-end setup for AI companies: Private Limited Company registration, Startup India recognition, trademark filing, and ongoing compliance management.

Register Your AI Startup

Intellectual Property Protection for AI Companies

Protecting AI innovations through intellectual property mechanisms is critical for competitive advantage and investor confidence. India's IP framework offers multiple protection pathways for AI companies.

Patent Protection for AI Algorithms

The Patents Act, 1970 excludes "computer programmes per se" from patentability under Section 3(k). However, AI inventions that produce a technical effect or solve a technical problem beyond mere computation are patentable. The Indian Patent Office has granted patents for AI-based methods in image recognition, natural language processing, predictive maintenance, and drug discovery where the application demonstrates a concrete technical contribution. AI companies should work with IP professionals to frame patent applications around the technical effect of their algorithms rather than the algorithm itself.

Trade Secrets for Model Weights and Training Data

Proprietary AI model weights, training datasets, hyperparameter configurations, and training methodologies can be protected as trade secrets under Indian common law and contractual mechanisms. Implement robust confidentiality agreements with employees, contractors, and data partners. Restrict access to model weights and training infrastructure through technical controls. Document trade secret status and protection measures to establish legal standing in case of misappropriation.

Trademark Registration for AI Products

AI product names, logos, and distinctive branding elements should be protected through trademark registration under the Trade Marks Act, 1999. Register trademarks in relevant classes: Class 9 (software and computer programs), Class 35 (data processing services), Class 42 (software as a service, AI platform services), and any sector-specific classes applicable to the AI product's domain.

Data Processing and Cross-Border Transfer Rules for AI

AI companies routinely process data across borders, whether for cloud-based model training, inference serving through global CDNs, or cross-border collaboration. The DPDP Act, 2023 establishes the framework governing these transfers.

The Act adopts a negative list approach to cross-border data transfers: personal data can be transferred to any country except those specifically restricted by the Central Government through notification. As of 2026, no restricted country list has been published, meaning cross-border transfers are currently unrestricted under DPDP Act provisions.

However, AI companies must still comply with:

  • DPDP Act obligations: All data processing requirements (consent, purpose limitation, security) apply regardless of where data is stored or processed
  • RBI data localization: Payment system data must be stored exclusively in India (RBI circular on Storage of Payment System Data, 2018)
  • CERT-In requirements: Cybersecurity incident logs must be maintained in India for 180 days
  • Contractual obligations: Cross-border data processing agreements must ensure the overseas processor maintains equivalent data protection standards

The Central Government is expected to notify detailed rules under the DPDP Act that may introduce specific requirements for cross-border data transfers, significant data fiduciary obligations, and consent manager registration. AI companies should monitor MeitY notifications and prepare flexible data governance frameworks that can accommodate additional requirements when notified.

Building an AI Governance Framework: Practical Steps

For AI companies operating in India, compliance is not a one-time exercise but an ongoing governance function. Here is a practical framework for building internal AI governance.

Step 1: Appoint an AI Governance Lead

Designate a senior executive (CTO, Chief Compliance Officer, or a dedicated AI Ethics Officer) responsible for AI governance. This person should oversee compliance across all regulatory layers, coordinate with sector regulators, and serve as the primary point of contact for Data Protection Board inquiries.

Step 2: Map AI Systems and Data Flows

Create a comprehensive inventory of all AI models deployed, their training data sources, data processing pipelines, output channels, and user touchpoints. Map personal data flows through each AI system from collection to deletion. This inventory forms the foundation for DPDP Act compliance and MeitY self-certification.

Step 3: Implement Technical Safeguards

Deploy the technical infrastructure required for compliance: content labelling systems, consent management platforms, data encryption (at rest and in transit), access controls for model weights and training data, audit logging for AI decisions, and incident detection systems for CERT-In reporting.
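Of the safeguards above, audit logging for AI decisions is the easiest to standardize early. A minimal sketch of an append-only decision log follows; the record schema is an assumption (no Indian regulator currently prescribes one), and a production system would write to tamper-evident storage rather than an in-memory list:

```python
import json
from datetime import datetime, timezone


def log_ai_decision(log: list, model_id: str, input_hash: str,
                    decision: str, human_reviewed: bool) -> None:
    """Append one structured, timestamped record per AI decision.
    Inputs are stored as hashes so the log itself holds no raw personal data."""
    log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model_id,
        "input_sha256": input_hash,
        "decision": decision,
        "human_reviewed": human_reviewed,
    })


audit_log = []
log_ai_decision(audit_log, "credit-scorer-v3", "ab12cd34",
                decision="declined", human_reviewed=True)
print(json.dumps(audit_log[-1], indent=2))
```

Structured records like this serve double duty: they satisfy explainability expectations (for instance, the RBI's requirement to document reasons for automated credit decisions) and provide the evidence trail a self-certification audit would inspect.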

Step 4: Conduct Regular Compliance Audits

Schedule quarterly internal audits covering: DPDP Act compliance (consent validity, purpose limitation, data principal rights), MeitY advisory compliance (labelling, self-certification, bias), IT Intermediary Rules compliance (grievance redressal, content moderation), and sector-specific obligations. Engage professional advisory services for annual external compliance reviews.

Step 5: Establish Incident Response Protocols

Create documented procedures for AI-related incidents: data breaches, model failures generating harmful content, deepfake misuse, algorithmic bias discoveries, and regulatory inquiries. Align response timelines with CERT-In's 6-hour reporting requirement and DPDP Act breach notification obligations.
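A simple deadline computation helps operationalize the CERT-In window in incident tooling. This sketch assumes the 6-hour clock runs from detection of the incident (CERT-In's directions peg reporting to noticing or being brought to knowledge of the incident):

```python
from datetime import datetime, timedelta, timezone

CERT_IN_WINDOW = timedelta(hours=6)


def reporting_deadline(detected_at: datetime) -> datetime:
    """Latest time by which a CERT-In incident report must be filed,
    assuming the 6-hour clock starts at detection."""
    return detected_at + CERT_IN_WINDOW


detected = datetime(2026, 3, 1, 10, 30, tzinfo=timezone.utc)
print(reporting_deadline(detected).isoformat())  # 2026-03-01T16:30:00+00:00
```

Wiring this computation into alerting (e.g. paging the on-call compliance lead with the concrete deadline) is far more reliable than expecting responders to do clock arithmetic mid-incident.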

Future of AI Regulation in India: What to Expect

India's AI regulatory framework is evolving rapidly. Several developments are expected to shape the compliance landscape in the near term.

DPDP Act rules notification: The Central Government will notify detailed rules under the DPDP Act specifying consent manager requirements, significant data fiduciary obligations, cross-border transfer restrictions, and Data Protection Board procedures. These rules will directly impact AI data processing compliance.

Dedicated AI legislation: MeitY has indicated that India may eventually enact a dedicated AI regulatory framework, potentially evolving from the current advisory approach. Any such legislation is likely to follow India's innovation-friendly philosophy while addressing high-risk AI applications in critical sectors.

IndiaAI Mission responsible AI standards: The Safe and Trusted AI pillar of the IndiaAI Mission is developing formal responsible AI testing benchmarks and certification frameworks. These may evolve from voluntary standards to recommended practices referenced in government procurement and regulatory guidance.

Global regulatory convergence: As the EU AI Act enters full enforcement (February 2025 for prohibited practices, August 2026 for most high-risk systems), global AI companies will face pressure to adopt the most stringent standard across jurisdictions. India's framework may incorporate elements of risk-based classification while maintaining its lighter regulatory touch.

Sector-specific AI guidelines: RBI, SEBI, IRDAI, and TRAI are expected to issue more detailed AI-specific guidelines for their sectors, particularly around explainability, fairness, and consumer protection in automated decision-making.

AI companies that build modular compliance infrastructure adaptable to evolving regulations will outperform those that treat compliance as a static checklist. Design your data governance, content labelling, and audit systems to be configurable as new rules are notified. The investment in flexible compliance architecture today will significantly reduce the cost of adapting to India's maturing AI regulatory framework.

Common Compliance Mistakes AI Companies Make

Based on the current regulatory environment, these are the most frequent compliance failures observed among AI companies operating in India.

  • Ignoring DPDP Act applicability: Many AI companies assume the Act only applies to traditional data processors. Every AI company processing personal data of Indian users is covered, including companies incorporated outside India
  • Treating MeitY advisories as optional: While advisories are not primary legislation, non-compliance can trigger enforcement under IT Intermediary Rules and loss of safe harbour, which carries severe operational consequences
  • Inadequate AI content labelling: Implementing minimal or easily removable labels fails the MeitY advisory's requirement for persistent, conspicuous identification of AI-generated content
  • Missing CERT-In reporting deadlines: The 6-hour reporting window for cybersecurity incidents is extremely tight. AI companies without pre-established incident detection and reporting protocols routinely miss this deadline
  • Neglecting sector-specific requirements: AI companies in fintech, healthtech, or insurtech that comply with MeitY's general framework but ignore RBI, CDSCO, or IRDAI requirements face sector-specific enforcement actions
  • No IP protection strategy: Failing to file patents, register trademarks, or implement trade secret protections for AI innovations leaves companies vulnerable to competitive copying and weakens investor confidence
  • Underestimating children's data obligations: AI platforms accessible to users under 18 face the strictest DPDP Act requirements. Platforms that do not implement age verification and parental consent mechanisms are exposed to maximum penalties
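The persistent-labelling failure in the list above is worth making concrete for text output: pair a conspicuous human-readable notice with a machine-readable metadata record, so the label survives copy-and-paste and can be re-verified programmatically. A minimal sketch; the field names are assumptions for illustration, as the MeitY advisory does not prescribe a wire format.

```python
import hashlib
import json

def label_ai_output(text: str, model: str) -> dict:
    """Wrap AI-generated text with a visible notice plus a metadata
    record keyed to a content hash (illustrative schema, not a
    MeitY-prescribed format)."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    labelled = f"[AI-generated content]\n{text}"
    metadata = {
        "ai_generated": True,
        "model": model,
        "content_sha256": digest,  # lets the label be re-verified later
    }
    return {"display": labelled, "metadata": json.dumps(metadata)}

out = label_ai_output("Sample model output.", "demo-llm")
print(out["display"].splitlines()[0])  # → [AI-generated content]
```

The content hash ties the metadata record to a specific output, which is one way to make a label "persistent" in the advisory's sense: stripping the visible banner does not erase the out-of-band record.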

Summary

India's AI regulation framework in 2026 is a layered, multi-regulator system built on MeitY advisories, the DPDP Act, IT Act provisions, and sector-specific rules. For tech companies, the compliance obligations are clear and enforceable. MeitY's March 2024 revised advisory mandates self-certification, AI content labelling, and bias prevention for all AI platforms serving Indian users. The DPDP Act, 2023 imposes comprehensive data processing obligations with penalties up to ₹250 crore. The IT Intermediary Rules govern platform liability, grievance redressal, and content moderation. Sector regulators add domain-specific requirements for AI in financial services, insurance, healthcare, and telecom.

India's approach is deliberately pro-innovation: no standalone AI law, no mandatory risk classification, no third-party conformity assessments, and self-certification rather than government pre-approval. The IndiaAI Mission's ₹10,372 crore investment in compute infrastructure, foundational models, and responsible AI standards signals the government's commitment to building the AI ecosystem while developing governance standards.

For AI companies planning to operate in India, the action plan is straightforward: register your entity, obtain Startup India recognition for tax benefits and scheme access, implement DPDP Act compliance from day one, build self-certification documentation per MeitY advisories, protect IP through patents and trademarks, and establish ongoing compliance management. The regulatory environment rewards companies that build compliance infrastructure proactively rather than reactively. India's AI regulation is not a barrier to innovation; it is a framework that responsible companies can navigate with proper planning and professional guidance.

Launch and Scale Your AI Company in India with IncorpX

From company incorporation and Startup India registration to IP protection, DPDP Act compliance, and ongoing regulatory management, IncorpX provides end-to-end support for AI companies navigating India's regulatory framework.

Frequently Asked Questions

What is India's current approach to AI regulation in 2026?
India follows a principles-based, advisory-driven approach to AI regulation rather than a single comprehensive AI law. The framework combines MeitY advisories issued under the IT Act, 2000, the Digital Personal Data Protection Act, 2023 for AI data processing, existing IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and sector-specific regulations from SEBI, RBI, and IRDAI. MeitY has signalled a preference for industry self-certification and responsible AI principles over prescriptive legislation.
What are the key MeitY advisories on AI that tech companies must know?
MeitY issued two critical advisories in March 2024. The first advisory (March 1, 2024) required government approval before deploying under-tested or unreliable AI models. The revised advisory (March 15, 2024) replaced the approval requirement with a self-certification and labelling obligation, requiring AI platforms to label AI-generated content, avoid bias, and ensure outputs do not violate Indian law. These advisories apply to all companies operating AI platforms in India.
Does India have a dedicated AI law like the EU AI Act?
No. As of 2026, India does not have a standalone AI legislation comparable to the EU AI Act. Instead, AI is regulated through a combination of MeitY advisories, the IT Act 2000, IT Intermediary Rules 2021, the DPDP Act 2023, and sector-specific regulators. The government has indicated that a dedicated AI regulatory framework may evolve from the current advisory-based approach, but the priority remains enabling innovation through light-touch regulation.
What is the Digital Personal Data Protection Act's impact on AI companies?
The DPDP Act, 2023 directly impacts AI companies processing personal data of Indian users. Key obligations include: obtaining valid consent before processing personal data for AI training, providing notice of purpose for data collection, ensuring data accuracy in AI models, implementing reasonable security safeguards, and enabling data erasure rights. AI companies processing children's data face additional restrictions under Section 9. Non-compliance carries penalties up to ₹250 crore.
What is the AI self-certification requirement under MeitY guidelines?
Under MeitY's revised advisory of March 15, 2024, AI platforms deploying generative AI models, large language models (LLMs), or AI chatbots accessible to Indian users must self-certify that their models do not generate content violating Indian law, that outputs are labelled to inform users they are AI-generated, and that the models do not exhibit algorithmic bias. This replaces the earlier government approval requirement with a compliance-by-design approach.
Which companies are covered under India's AI regulation framework?
The MeitY advisories and IT Intermediary Rules apply to all intermediaries and platforms deploying AI models in India, including: AI-as-a-service providers, companies using generative AI in consumer-facing products, platforms offering AI chatbots or virtual assistants, companies training AI models on Indian user data, and foreign AI companies serving Indian users. Both startups and established enterprises must comply.
What are the labelling requirements for AI-generated content in India?
MeitY's March 2024 advisory requires that all AI-generated or AI-assisted content must carry clear labels or watermarks indicating it was generated by AI. This applies to text, images, audio, and video outputs from generative AI platforms. The labelling must be persistent (not easily removable), conspicuous (clearly visible to end users), and applied through metadata or embedded identifiers where technically feasible.
What is the IndiaAI Mission and how does it affect compliance?
The IndiaAI Mission, launched in March 2024 with an allocation of ₹10,372 crore, is a government initiative to build AI compute infrastructure, develop foundational AI models, and create an AI innovation ecosystem. For tech companies, the mission creates opportunities to access subsidized GPU compute through empanelled cloud providers, participate in government AI projects, and align with India's responsible AI standards. Companies registered under Startup India may receive preferential access.
How does Section 79 of the IT Act apply to AI platforms?
Section 79 of the Information Technology Act, 2000 provides safe harbour protection to intermediaries, including AI platforms, from liability for third-party content. However, this protection is conditional: AI platforms must comply with the IT (Intermediary Guidelines) Rules, 2021, implement due diligence measures, appoint grievance officers, and take down unlawful content within stipulated timelines. An AI platform that generates content rather than merely hosting it may not qualify as an intermediary, potentially losing safe harbour protection.
What are the penalties for non-compliance with AI regulations in India?
Penalties vary by the specific regulation violated. Under the DPDP Act, 2023: up to ₹250 crore for data breaches and up to ₹200 crore for non-compliance with data processing obligations. Under the IT Act, 2000: Section 43A imposes compensation for negligent data handling, and Section 69A empowers the government to block non-compliant AI platforms. MeitY advisories, while not carrying direct statutory penalties, can trigger enforcement action under the IT Intermediary Rules, including loss of safe harbour protection.
How does India's AI regulation compare to the EU AI Act?
The EU AI Act uses a risk-based classification system (unacceptable, high, limited, minimal risk) with mandatory compliance requirements, third-party audits, and fines up to €35 million or 7% of global turnover. India's approach is advisory-driven and principles-based, relying on self-certification, sector-specific regulation, and the DPDP Act for data protection. India does not classify AI systems by risk tier, does not require third-party conformity assessments, and prioritizes innovation enablement over prescriptive compliance.
Do AI startups need any special registration or licence in India?
No special AI-specific licence or registration exists in India as of 2026. AI startups register as standard business entities, typically as a Private Limited Company, and comply with applicable regulations based on their sector and data processing activities. However, AI startups should obtain DPIIT Startup India recognition for tax benefits and funding access, implement DPDP Act compliance when processing personal data, and consider patent registration for proprietary AI algorithms.
What is NITI Aayog's role in India's AI governance framework?
NITI Aayog has published foundational policy documents on responsible AI, including 'Responsible AI for All' (2021) and the National Strategy for Artificial Intelligence (#AIforAll). These documents outline seven principles: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and positive human values. While not legally binding, these principles inform MeitY's regulatory approach and are referenced in government AI procurement standards.
What are the data localization requirements for AI companies in India?
The DPDP Act, 2023 permits cross-border data transfers except to countries specifically restricted by the Central Government through notification. As of 2026, no restricted country list has been published. However, AI companies must ensure that personal data processing complies with DPDP Act provisions regardless of where the data is stored. Sector-specific localization requirements from RBI (payment data) and CERT-In (cybersecurity incident data) apply additionally to AI platforms in those sectors.
How should AI companies prepare for upcoming regulation changes?
AI companies should implement a compliance-readiness framework that includes: mapping all personal data processed by AI models against DPDP Act requirements, implementing AI content labelling systems per MeitY advisories, documenting algorithmic decision-making processes for transparency, appointing a Data Protection Officer where required, conducting regular bias audits on AI models, and engaging professional compliance advisory services. Building compliance infrastructure now reduces the cost of adapting to future regulations.
What is the role of sector-specific regulators in AI governance?
Beyond MeitY, several sector regulators have issued AI-specific guidance. RBI regulates AI use in lending decisions and credit scoring, requiring explainability in automated credit decisions. SEBI oversees AI-driven trading algorithms and robo-advisory platforms under existing market regulations. IRDAI monitors AI in insurance underwriting and claims processing. TRAI addresses AI in telecom services. Companies operating AI in regulated sectors must comply with both MeitY's general framework and sector-specific requirements.
Can foreign AI companies operate in India without a local entity?
Foreign AI companies can serve Indian users remotely, but face compliance obligations including DPDP Act data processing requirements, IT Intermediary Rules (if classified as intermediaries), and potential Significant Data Fiduciary obligations requiring a Data Protection Officer and an Indian point of contact. For sustained operations, establishing a local entity through company registration in India is strongly recommended to manage regulatory compliance, enter government contracts, and access IndiaAI Mission resources.
What intellectual property protections are available for AI innovations in India?
AI companies can protect innovations through multiple IP mechanisms: patent registration for novel AI algorithms, methods, and architectures (the Patents Act, 1970 Section 3(k) excludes computer programs per se, but an algorithm that produces a technical effect can be patentable), trademark registration for AI product branding, copyright protection for training datasets and software code, and trade secret protection for proprietary model weights and training methodologies.
What are the reporting obligations for AI-related cybersecurity incidents?
Under CERT-In directives (April 2022), all companies including AI platforms must report cybersecurity incidents within 6 hours of detection. AI-specific incidents such as data poisoning attacks, model theft, adversarial manipulation, or unauthorized access to training data fall under this reporting obligation. The DPDP Act, 2023 additionally requires notification to the Data Protection Board and affected users in case of personal data breaches, with specific timelines to be notified by the Board.
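Given the tightness of the 6-hour CERT-In window, a deadline tracker built before the first incident is a simple safeguard. A minimal sketch; the incident timestamps are made up for illustration, and only the 6-hour window itself comes from the CERT-In directions.

```python
from datetime import datetime, timedelta, timezone

REPORT_WINDOW = timedelta(hours=6)  # CERT-In directions, April 2022

def report_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the CERT-In report must be filed."""
    return detected_at + REPORT_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True once the filing deadline has passed."""
    return now > report_deadline(detected_at)

# Illustrative incident timeline (timezone-aware to avoid clock ambiguity).
detected = datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc)
print(report_deadline(detected))                            # → 2026-01-05 15:00:00+00:00
print(is_overdue(detected, detected + timedelta(hours=7)))  # → True
```

In practice the detection timestamp would come from the incident-detection pipeline, and the deadline would drive alerting (paging the compliance officer well before the window closes) rather than a manual check.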
How does the IT (Intermediary Guidelines) Rules, 2021 affect AI chatbots?
AI chatbots deployed on platforms with Indian users are subject to the IT Intermediary Rules, 2021, which require: publishing terms of service and privacy policies, appointing a Grievance Officer and Chief Compliance Officer (for significant social media intermediaries), implementing content moderation mechanisms, responding to government takedown orders within 36 hours, and enabling user complaint resolution within 15 days. AI chatbots that generate rather than host content may face additional scrutiny regarding intermediary classification.
What tax benefits are available for AI companies in India?
AI companies incorporated in India can access several tax benefits: Section 80-IAC provides a 3-year tax holiday for DPIIT-recognized startups (turnover up to ₹100 crore), Section 35(2AB) offers a deduction on in-house R&D expenditure for AI development (the earlier weighted deduction has been scaled back to 100% since FY 2020-21), the concessional corporate tax rate of 15% under Section 115BAB applies to new manufacturing companies (including AI hardware), and angel tax abolition (effective April 2025) removes Section 56(2)(viib) barriers for AI startup fundraising. Startup India registration is the first step to accessing these benefits.

Dhanush Prabha is the Chief Technology Officer and Chief Marketing Officer at IncorpX, where he leads product engineering, platform architecture, and data-driven growth strategy. With over half a decade of experience in full-stack development, scalable systems design, and performance marketing, he oversees the technical infrastructure and digital acquisition channels that power IncorpX. Dhanush specializes in building high-performance web applications, SEO and AEO-optimized content frameworks, marketing automation pipelines, and conversion-focused user experiences. He has architected and deployed multiple SaaS platforms, API-first applications, and enterprise-grade systems from the ground up. His writing spans technology, business registration, startup strategy, and digital transformation - offering clear, research-backed insights drawn from hands-on engineering and growth leadership. He is passionate about helping founders and professionals make informed decisions through practical, real-world content.