
Protecting Commercial Data and Intellectual Property in the Age of AI

Authors: Gaurav Shanker, Managing Partner, and Yamini Mishra, Associate

Article by Business Law Chamber

The rapid integration of artificial intelligence (“AI”) into Indian commercial and professional ecosystems has transformed how organisations operate, streamlining workflows, enhancing decision-making, and redefining efficiency standards across sectors. Yet, this technological shift is not without risk. As businesses adopt AI-driven platforms to analyse, draft, or manage data, they also expose themselves to heightened vulnerabilities surrounding data privacy, confidentiality, and intellectual property (“IP”). Safeguarding sensitive information and proprietary assets has thus become not only a matter of commercial necessity but a continuing legal and ethical imperative in an AI-enabled environment.

THE EVOLVING RISK LANDSCAPE

India currently lacks a targeted law that defines data ownership or regulates the treatment of information shared with AI platforms. The Digital Personal Data Protection Act, 2023 (“DPDP Act”), though enacted, primarily focuses on protecting personal data. It grants individuals (data principals) certain rights, such as consent, correction, and erasure, but it neither extends to commercial or proprietary information nor does it expressly address the obligations of AI service providers processing such data.

In the absence of a clear legal framework, AI platforms, particularly those offering generative or chatbot functionalities, operate under their own terms of service, which often grant broad rights over user-uploaded content. Once such data is shared, the owner may lose control over how it is stored, processed, or repurposed, particularly where contractual terms on ownership, liability, and security are broadly worded or ambiguous.

CONFIDENTIALITY AND DATA SECURITY RISKS

The consequences of current gaps in legal protection are profound. Uploading confidential data, trade secrets, or information protected under non-disclosure obligations to AI systems could result in unauthorised access by vendors, subcontractors, or even other users through model outputs or system errors.

For instance, in March 2023, OpenAI confirmed a technical glitch in its platform “ChatGPT” that briefly exposed the titles of some users’ conversations to other users. The issue arose from a caching bug in an open-source library, for which a fix was promptly released and validated. Nonetheless, the incident highlights that even minor system vulnerabilities can compromise user confidentiality and erode trust in AI-driven tools.

Businesses, therefore, need to move beyond passive reliance on platform warranties or generic assurances from service providers and take proactive responsibility for securing their data.

INTELLECTUAL PROPERTY CHALLENGES IN AI SYSTEMS

The use of AI tools also raises complex legal questions around intellectual property rights in both inputs and outputs. Content generated with AI assistance, such as research notes, creative text, design elements, or code, blurs traditional notions of human authorship recognised under the Copyright Act, 1957, which protects only works created through human skill and judgment. Similarly, uploading strategic documents or other materials to AI platforms may expose organisations to the risk of their intellectual assets being incorporated into the platform’s broader dataset.

Key considerations include whether the platform’s terms of use grant it ownership or extensive usage rights over user-generated content and derived outputs; whether proprietary materials could be mined, adapted, or aggregated into shared datasets, diminishing exclusivity; and whether commercially sensitive information risks inadvertent disclosure through cloud-based or publicly accessible AI systems. While Indian law protects commercially sensitive and confidential information mainly through contractual instruments such as non-disclosure agreements (NDAs), and remedies such as injunctions, these safeguards offer limited recourse once proprietary information enters a global AI environment.

BUILDING SAFE AI PRACTICES: CONTRACTUAL, TECHNICAL, AND GOVERNANCE MEASURES

Against this backdrop, organisations must strengthen their contractual and operational defences to safeguard sensitive information and build long-term resilience. Key measures include:

  • Contractual Controls: When engaging with an AI vendor, ensure your agreement clearly addresses: ownership of input and output data, permitted uses of uploaded content (including whether the vendor may train models using it), deletion or return of the data at contract termination, and audit rights to verify compliance.
  • Technical Safeguards: Protect data through encryption, access restrictions, and secure storage. Use private or sandboxed environments wherever possible, and de-identify sensitive information before uploading to public AI tools.
  • Internal Governance: Establish internal AI-use policies defining what data may be shared, by whom, and under what conditions. Maintain usage logs, review permissions periodically, and conduct regular compliance audits.
  • IP and Access Management: Identify and label proprietary materials, implement role-based access controls, and review vendor terms to confirm that intellectual property rights in all outputs remain with the organisation.
  • Transparency and Vendor Due Diligence: Responsible AI use depends on transparency and trust. Businesses should clearly communicate how the data is handled and work only with vendors that demonstrate strong data protection standards, accountability, and clear contractual commitments.
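As a purely illustrative sketch of the de-identification step described under Technical Safeguards, the snippet below shows how an organisation might redact common identifier patterns from a document before it is uploaded to a public AI tool. The pattern set and placeholder labels are hypothetical; a real implementation would need patterns tailored to the organisation's data, and pattern-based redaction alone does not guarantee compliance with any legal obligation.

```python
import re

# Hypothetical identifier patterns; real deployments would tailor these
# to the organisation's own data categories and compliance requirements.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "PAN":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # Indian PAN card format
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Contact Priya at priya.sharma@example.com or +91 98765 43210."
print(deidentify(sample))
```

A screen of this kind would typically run inside the organisation's own environment, before any data leaves it, so that the public AI tool never receives the raw identifiers.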

CONCLUSION

The future of business and regulatory practice will be inseparable from advanced AI systems. Yet, the legal uncertainty surrounding data ownership, confidentiality, and IP protection continues to pose serious risks for Indian organisations. Therefore, to harness AI responsibly, businesses must shift from passive acceptance of vendor terms to active governance, embedding due diligence, contractual safeguards, and robust technical controls into their operations.

Disclaimer: The views in this article are the authors' own. This article is not intended to substitute legal advice. In no event shall the authors be liable for any direct, indirect, special, or incidental damage resulting from, arising out of, or in connection with the use of this information. For any further queries or follow-up, please contact us at communication@businesslawchamber.com.