Navigating the Future: Legal Perspective on AI, Responsibility, and Consciousness
- Aequitas Victoria
Paper Code: AIJACLAV13RP2025
Category: Research Paper
Date of Publication: May 19, 2025
Citation: Ms. M Kiruba & Ms. Shreedevi S, “Navigating the Future: Legal Perspective on AI, Responsibility, and Consciousness", 5, AIJACLA, 134, 134-145 (2025), <https://www.aequivic.in/post/navigating-the-future-legal-perspective-on-ai-responsibility-and-consciousness>
Author Details: Ms. M Kiruba, 3rd Year Law Student, Vellore Institute of Technology (Chennai Campus) &
Ms. Shreedevi S, 3rd Year Law Student, Vellore Institute of Technology (Chennai Campus)
Abstract
Artificial Intelligence (AI) has become an integral part of daily life, streamlining tasks and enhancing efficiency across industries. However, its increasing role in content creation and innovation raises complex legal questions regarding ownership and liability. The debate is divided—some argue that AI should be granted ownership rights and held liable for its actions, while others maintain that the creators or developers should bear these responsibilities.
Despite the rapid advancement of AI, existing intellectual property and liability frameworks remain inadequate in addressing these challenges. Current legal doctrines do not explicitly define AI’s status in relation to intellectual property rights, leaving significant gaps and ambiguities. This paper critically examines the intersection of AI, intellectual property, and privacy laws, analyzing whether AI-generated works can be legally protected, who should bear responsibility for AI-related violations, and how international legal frameworks address these issues. By assessing global approaches and identifying potential reforms, this study aims to provide a clearer path toward an equitable and effective legal framework for AI ownership and liability.
Keywords: Artificial intelligence, Intellectual property, Reforms, Ownership
INTRODUCTION:
Artificial Intelligence (AI) has rapidly transformed a variety of industries, from data processing to the creative arts, giving rise to complicated legal issues of privacy, responsibility, and ownership. With the increasing sophistication of AI-generated works, such as deepfake videos, music, books, and inventions, traditional intellectual property (IP) and privacy rules are struggling to keep pace. Determining who owns AI-generated content, and who bears accountability when AI infringes patents, copyrights, or individual privacy, are pressing questions. Who should hold the rights: the person who created the AI, the person using it, or nobody?
Beyond ownership, AI raises issues of responsibility. Should the corporation that developed the AI system, the user who deployed it, or the AI itself be held responsible if it produces offensive material, violates patents, or misuses personal information? While some contend that, like companies, AI should have legal personality, others maintain that since AI is not sentient, it cannot bear rights or obligations.
The present research investigates these critical issues by looking at how AI, IP law, and privacy laws interact. It seeks to evaluate how various legal systems throughout the globe are addressing these issues, as well as the legal standing of AI-generated works, potential risks of copyright infringement, and data privacy abuses. Addressing these legal loopholes is essential given the speed at which AI is developing, not only to safeguard enterprises and human creators but also to guarantee the moral and equitable use of AI-driven technology. This study intends to add to the continuing discussion on how to govern AI while encouraging innovation and upholding legal responsibility by examining current legal frameworks, court decisions, and international legislation. It also makes recommendations regarding whether IP laws should be amended.[1]
METHODOLOGY:
This study adopts a non-doctrinal, empirical, and multidisciplinary approach, gathering both qualitative and quantitative data through surveys, interviews, and case studies. Stakeholder viewpoints, comparative analysis, and field research are used to examine the sociological, ethical, and legal ramifications of AI. This approach ensures an understanding that is grounded in evidence and goes beyond theoretical legal frameworks.
FINDINGS:
Artificial intelligence-generated creations are not legally protected by current intellectual property laws, which primarily acknowledge human authorship.
Courts have repeatedly held that AI cannot own patents or copyright; human intervention is necessary for legal recognition.
AI-generated trademarks present infringement concerns, so more detailed regulations are necessary to avoid legal disputes.
Privacy regulations, such as India's Digital Personal Data Protection Act (DPDPA), 2023, find it difficult to control how AI collects data, especially when it comes to web scraping and profiling.
Whether to grant AI legal personhood remains debated, since doing so raises questions of accountability, culpability, and moral ramifications.
LITERATURE REVIEW:
1 Technical, Legal and Ethical Opportunities and Challenges of Governing Artificial Intelligence in India - Mabel V. Paul
This paper assesses how privacy regulations, intellectual property, and AI are evolving together. The study raises issues of authorship, responsibility, and ethical risk by pointing out the absence of legal frameworks addressing works produced by AI. Discussing India's legal environment, Paul highlights how inadequate the country's present intellectual property rules are to acknowledge AI as an author or creator, and criticizes the shortcomings of the Digital Personal Data Protection Act, 2023 in regulating AI-driven data collection. The paper also examines global legal frameworks, including the European Union's AI-specific laws, in order to suggest a methodical framework for AI governance in India. To guarantee responsible AI innovation while upholding accountability, it emphasizes the necessity of legal reform. The results add to the larger discussion of how to balance technological advancement with moral and legal protections.[2]
2 Artificial Consciousness in AI: A Posthuman Fallacy- M. Prabhu and J. Anil Premraj
The paper assesses how artificial consciousness is portrayed in science fiction. Examining posthumanist viewpoints, technological singularity, and superintelligence, the research compares the real-world limitations of AI with how it is frequently portrayed in films. By critically examining neuroscientific ideas and cognitive robotics, it argues that fears of AI becoming genuinely conscious are rooted more in science fiction than in scientific fact. Examining the philosophical ramifications of human-machine interaction, the study draws attention to ethical issues and difficulties in AI development.[3]
3 An Evolutionary Step in Intellectual Property Rights – Artificial Intelligence and Intellectual Property- Colin R. Davies
Davies (2011) assesses the shortcomings of current intellectual property (IP) laws in addressing the complexity of AI-generated works. He looks at the implications of UK copyright and patent law, specifically Section 9(3) of the Copyright, Designs, and Patents Act 1988, which gives authorship of computer-generated works to the person who arranges their creation. Davies contends that AI has progressed beyond being a tool and suggests giving AI systems legal personality for IP ownership. This viewpoint challenges conventional legal definitions and emphasizes the need for proactive legislative reform to address AI's growing role in innovation and creativity.[4]
4 History of Artificial Intelligence- Jolly Tewari and Malobika Bose
Tewari and Bose (2023) give a thorough historical account of artificial intelligence (AI), from its first philosophical investigations to more recent developments in deep learning and big data. The article highlights important turning points, such as the advancement of neural networks and expert systems and the use of AI in fields including healthcare, finance, and law. In addition, the authors address ethical and legal issues, highlighting the necessity of AI frameworks and legislation to promote responsible innovation. Their work demonstrates AI's rapid development and transformative social effects.[5]
HISTORICAL BACKGROUND:
Evolution of AI and Intellectual Property Rights (IPR)
The evolution of Artificial Intelligence (AI) and Intellectual Property Rights (IPR) reflects the intersection of technological development and legal adaptation. The following content traces this adaptation and evolution era by era.
The Pre-Digital Age: Foundations of Intellectual Property (Ancient – Early 20th Century)
Intellectual property roots trace back to ancient civilizations, with formal laws emerging in Venice (1474). England’s Statute of Monopolies (1624) and Statute of Anne (1710) laid the foundation for patent and copyright laws. The Paris (1883) and Berne (1886) Conventions enabled international IP protection, leading to the formation of WIPO (1967).
The Birth of Artificial Intelligence (1950s–1980s)
John McCarthy's Dartmouth Conference (1956) and Alan Turing's Turing Test (1950) marked the beginning of artificial intelligence. Expert systems and symbolic reasoning were the core topics of early AI research, but IP rules remained human-centric and offered no protection for works produced by AI.
AI’s Expansion in the Digital Age (1990s–2010s)
Advancements in machine learning enabled AI-generated content in software, music, and art. The TRIPS Agreement (1994) set global IP standards, but courts struggled with rights in AI-generated content. The Thaler v. Vidal[6] case reaffirmed that only natural persons can be named as inventors on patents, raising ownership and accountability concerns.
AI-Generated Works and Legislative Response (2020s–Present)
Growing AI-driven innovation has prompted legal reforms:
a. South Africa granted a patent naming the AI system DABUS[7] as an inventor.
b. The EU is drafting AI-specific IP regulations.
c. India is considering updates to its copyright and patent laws.[8]
THE STATUS QUO OF LEGAL DOCTRINE
Domestic Legislations
Copyright Law and AI
In India, copyright protection is primarily governed by the Copyright Act, 1957. A fundamental prerequisite for copyright protection under Section 13[9] is originality. Under Indian courts' traditional interpretation, originality derives from human intellectual effort, so AI-generated works are not eligible for copyright ownership under current legislation. The debate centres on whether AI produces work that satisfies the originality requirement, considering that AI systems are trained on enormous datasets curated by human creators. Some contend that because AI models are programmed and trained using human input, works produced by AI should be protected by copyright; others argue that AI fails the originality test because it lacks human inventiveness.
Since only natural persons are recognized as authors under Section 2(d) of the Copyright Act, establishing authorship presents another difficulty. Ambiguity arises because AI lacks legal personality, making it difficult to identify the author of works produced by AI. As AI is not legally recognized as an author, lawmakers must amend existing laws to address ownership and originality in AI-generated works.[10]
Patent Law and AI
The Patents Act, 1970, which governs Indian patent law, stipulates that an invention must be new, involve an inventive step, and be capable of industrial application. AI systems are creating novel medications, materials, and technologies, demonstrating the growing significance of AI in innovation. AI is not yet recognized as an inventor under Indian patent law, though: only a "natural person" may be named as an inventor under Section 2(1)[11] of the Patents Act, meaning that AI-generated inventions are not protected. Without direct human involvement, determining whether an AI-generated idea satisfies the "inventive step" requirement is difficult. To balance the contributions of humans and machines and support AI-driven innovation, the Indian patent system must be amended to resolve these ambiguities.[12]
Trademark Law and AI
India's trademark law, governed by the Trademarks Act, 1999[13], mainly protects brand identities. AI is increasingly used in branding, including the creation of logos and slogans. While AI-generated trademarks are legally valid, trademark applications must be filed by a natural person or legal entity. Another major concern is the possibility that AI-generated trademarks will infringe on existing marks: systems trained on large datasets may unintentionally create names or designs similar to existing trademarks, which could result in legal disputes. To prevent infringement and maintain fair competition, clear guidelines are needed on the distinctiveness and protectability of AI-generated trademarks.
Designs Act and AI
Industrial designs are governed by the Designs Act, 2000, which mandates human authorship and originality. Industrial designs generated by artificial intelligence (AI) are difficult to protect under existing rules, since Section 2(d)[14] of the Act requires that a design be made by a human. Legal uncertainty surrounds the ownership and registration of AI-generated designs as AI becomes increasingly integrated into product design and development. Businesses that depend on AI for innovation may struggle to obtain design protection if AI-assisted creations are not explicitly recognized.[15]
International Perspective
World Intellectual Property Organization (WIPO)
WIPO is a key contributor in shaping international IP law pertaining to AI. Its 2019 Technology Trends Report emphasizes how AI affects copyrights and patents, and its Revised Issues Paper on IP Policy and AI raises important questions concerning AI authorship, inventorship, and ownership. WIPO is working to create consistent rules across nations to handle AI-generated works and AI-assisted inventions.[16]
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement)
The TRIPS Agreement, administered by the World Trade Organization (WTO), does not specifically address AI-generated material but does set minimum IP protection requirements for member nations. The growing use of AI in creative and industrial domains has triggered discussions over whether TRIPS should be amended to cover AI-generated outputs. Because TRIPS sets only minimum standards, countries remain free to establish their own AI-IP policies, yet jurisdictional discrepancies give rise to legal problems. The extension of intellectual property rights to AI-generated concepts may be discussed in future WTO sessions.[17]
Berne Convention for the Protection of Literary and Artistic Works (1886)
The Berne Convention grants creators copyright protection automatically, but it presumes a human author. AI-generated literature, art, and music therefore raise legal concerns of ownership and protection. Some contend that AI-generated content should be attributed to its human creator, while others suggest a sui generis right specifically for AI-generated works. As AI-generated creative works proliferate, there is a growing need to revise the Convention's norms.[18]
Paris Convention for the Protection of Industrial Property (1883)
The Paris Convention lays out essential guidelines for international trademarks and patents, such as priority rights and national treatment. Like the Berne Convention, however, it does not take into consideration inventions produced or aided by AI. As seen in the DABUS case, applications naming an AI as inventor are rejected because most nations require a human inventor on patent applications. There are ongoing discussions on whether AI should be recognized as an inventor, and as AI-driven inventions multiply, changes may be required in the future.[19]
LEGAL PERSONALITY OF AI
The growing independence and inventiveness of artificial intelligence (AI) have generated discussions over whether AI deserves legal status. Since companies have legal personhood despite lacking a physical form or human mind, some contend that AI may be recognized similarly. One example of a trend towards recognizing AI's contributions to innovation is the DABUS case[20], in which South Africa granted a patent for an AI-generated invention. But courts throughout the world have largely rejected this idea. The court in Thaler v. Vidal (US)[21] noted that AI lacks intent, moral reasoning, and autonomous accountability, factors necessary for legal recognition, and decided that only humans may be acknowledged as inventors under patent law.[22]
Beyond patents, AI's involvement in identity replication and content production raises personality-rights issues. Misuse of AI has increased because of its capacity to produce strikingly lifelike digital material, such as voice clones, videos, and photographs. AI-generated deepfakes, for example, have been used to produce unauthorized celebrity likenesses: in one prominent instance, Bruce Willis[23] had his appearance digitally recreated using AI for a commercial despite having minimal input. This instance brings to light the ethical and legal concerns around AI's capacity to mimic human identity.
Legal frameworks have started to adapt to these difficulties. In Anil Kapoor v. Simply Life & Ors[24], the Delhi High Court upheld the value of personality rights by ruling in favour of shielding a celebrity's name, image, and likeness from AI-driven exploitation. Similarly, litigation such as Andersen v. Stability AI[25] reflects rising concerns over AI-generated content exploiting copyrighted material without permission. The emergence of generative AI technologies, which can produce material indistinguishable from human-produced content, has complicated legal systems by making it harder to ascertain ownership and attribution.
However capable AI may be, granting it complete legal personhood comes with serious concerns. A significant worry is the accountability gap: if AI were regarded as a separate legal entity, it could shield developers and businesses from liability. This might create gaps in which AI-generated behaviour, whether disinformation, defamation, or copyright infringement, lacks any obvious bearer of responsibility. Enforcing duties would be extremely difficult without human oversight.
A more pragmatic strategy is to adopt a hybrid legal paradigm. Under this concept, AI-generated works may be given some legal protections, such as copyrights or patents, but the legal burden would still rest with the human developers, artists, or businesses in charge of AI operations. This would preserve legal accountability within current frameworks while ensuring that AI-generated content is appropriately acknowledged. Furthermore, legislative changes are necessary to provide precise rules for AI's use in identity protection, intellectual property, and content production.
The legal regulation of AI must reconcile human accountability with technological advancement. Current legal systems are not yet prepared to support complete AI personhood, even as AI continues to develop. Policies must centre on preserving intellectual property, defending the rights of individuals, and guaranteeing the ethical use of AI. To ensure that AI-generated works are acknowledged while keeping human stakeholders accountable, governments and legal institutions must work toward more transparent AI legislation. By implementing structured laws, the legal system can prevent AI misuse and promote responsible innovation, guaranteeing that technological breakthroughs do not compromise moral and legal integrity.[26]
AI Cannot Possess Consciousness
AI's Lack of Consciousness and Legal Rights
The question of whether AI can possess consciousness has long been a subject of debate. This concept is both intellectually stimulating and highly complex, as it challenges fundamental understandings of the nature of mind, cognition, and the possibility of non-biological entities exhibiting sentient behavior. Popular culture, particularly in films, often portrays robots as self-aware beings capable of independent thought and actions to protect their identity.
Sahu and Karmakar (2022) argue that "the programmed machine acts almost like a mind. It stands between the animate and the inanimate, thus evolving as a non-animate being occupying a space." This perspective suggests that AI operates in a liminal state, challenging traditional classifications of life and consciousness.
However, the opposing viewpoint asserts that AI merely executes programmed commands without possessing beliefs, opinions, emotions, or independent thought. AI functions as a reflection of its creator's instructions rather than an autonomous entity. Consequently, assigning ownership or liability to AI remains infeasible due to its fundamental lack of consciousness[27].
Intellectual Property and Human Authorship
Intellectual property (IP) law is primarily concerned with protecting human creativity and invention. A fundamental requirement in IP law is human intervention in the creation of a work. Currently, India's legal framework does not grant separate rights to works generated solely by AI. This approach aligns with Western jurisprudence, which recognizes only human authorship in intellectual property rights.
The recent case of ANI v. OpenAI[28] has raised significant questions regarding OpenAI's data usage and the extent of AI's role in creative processes. The litigation reinforces the principle that human creativity and innovation are central to intellectual property protection.
Contrasting AI with Human Creators in IP Frameworks
Traditional human creators possess consciousness, allowing their creations to be infused with emotions, intentions, and personal expressions. Intellectual property (IP) law has historically recognized and protected such works based on the premise that creativity stems from human agency.
As AI continues to advance, approaching and even surpassing human cognitive abilities in reasoning, problem-solving, and situational awareness, it raises complex legal and ethical questions. The evolving definition of personhood—already extended to fictional corporate entities—may be challenged in unprecedented ways. However, the issue at hand is not merely whether AI can achieve a form of "sentience" familiar to humans but whether its capabilities warrant legal recognition within the IP framework.[29]
Despite AI’s increasing autonomy, its outputs lack independent intention and self-derived expression, distinguishing them from human-created works. This distinction is crucial in determining whether AI-generated content should be granted ownership rights or if liability should be assigned to AI entities. The question remains: should AI be granted a form of personhood within legal and intellectual property frameworks, or should its role remain that of an advanced tool under human ownership and control?
AI LIABILITY: PRODUCT OR SERVICE?
A fundamental principle of legal liability is that when an individual commits a crime, they are held accountable and punished accordingly. While exceptions exist based on the individual's knowledge or intent, liability generally stems from taking responsibility for one’s actions. Every person has a duty to avoid committing crimes, and failure to do so results in legal consequences. However, when it comes to AI, the question arises: Who should be held responsible—the creator, the user, the AI itself, or the owner?
One approach to addressing AI liability is through the concept of product liability, which holds manufacturers or sellers accountable for defective products that cause harm. Establishing product liability involves three key elements: (1) the item must qualify as a product, (2) harm must have occurred, and (3) the product must be defective. These factors, along with causation and the burden of proof, determine liability. However, India currently lacks a dedicated legal framework for AI-related liabilities. This section critiques the Consumer Protection Act (CPA), 2019, and explores potential ways to incorporate AI liability into the Indian legal system.
Is AI a Product or a Service?
Under Section 2(33) of the Consumer Protection Act, 2019, a product is defined as:
"Any article or goods or substance or raw material or any extended cycle of such product, which may be in gaseous, liquid, or solid state possessing intrinsic value which is capable of delivery either as wholly assembled or as a component part and is produced for introduction to trade or commerce, but does not include human tissues, blood, blood products, and organs."[30]
Some may argue that AI could fall under the category of "any extended cycle" of a product, but this remains a speculative interpretation. As of now, no Indian law explicitly classifies AI systems as products.[31]
On the other hand, Section 85 of the CPA holds a product service provider liable where it:
"Provides a service that is faulty, imperfect, deficient, or inadequate in quality, nature, or manner of performance, as required by law, contract, or otherwise."[32]
A company offering AI-related services may be held liable under this provision, and courts often apply negligence as the standard for assessing liability. However, AI’s autonomous nature complicates this issue, as liability in traditional service contexts is usually assigned to developers rather than users. If AI is classified as a service, users might bear a disproportionate share of liability despite having limited control over its functioning.
Challenges of Applying Negligence to AI
Applying negligence as a liability framework for AI presents four key challenges:
Lack of user expertise – AI users may lack the technical knowledge to predict AI failures, making it difficult to establish a reasonable standard of care.
Contradiction with AI’s purpose – AI is designed to reduce human effort, and requiring constant human oversight undermines this objective.
Limited user control – Negligence assumes users can mitigate risks, but AI’s complexity makes such control unrealistic, leading to unfair liability allocation.
Bias and discrimination risks – AI's potential for biased decision-making gives rise to discrimination claims, further complicating liability assessments.
AI AND PRIVACY LAW: THE DIGITAL PERSONAL DATA PROTECTION ACT, 2023
With the rapid growth of AI applications, the protection of personal data has become increasingly critical. The Digital Personal Data Protection Act (DPDPA), 2023,[33] aims to enhance accountability among AI developers regarding the collection, processing, and use of personal data. The Act mandates compliance with data protection principles and emphasizes the need for developers to prevent potential violations during AI model training.[34]
One of the primary challenges posed by the DPDPA concerns AI training methodologies, particularly web scraping. Obtaining individual consent for large-scale data collection is often impractical, making compliance difficult.[35] However, the Act includes a broad exemption for publicly available data, allowing AI models to train on such data without restrictions.[36] This exemption is notably broader than similar provisions in the General Data Protection Regulation (GDPR),[37] China's Personal Information Protection Law (PIPL),[38] and Singapore's Personal Data Protection Act (PDPA).[39] Despite this, concerns remain regarding the legality of data scraping and whether the DPDPA applies to data that was once public but later removed.[40]
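To illustrate, in purely technical terms, what a minimal safeguard around web scraping might look like, the following Python sketch checks a website's robots.txt file before fetching a page. This is a hypothetical illustration only; the target URL, user-agent string, and helper function are assumptions introduced for the example, and such a check does not by itself establish compliance with the DPDPA or any other data protection law.

```python
# Hypothetical sketch: one minimal safeguard a scraper collecting "publicly
# available" training data might apply. It does NOT establish legal compliance.
import urllib.request
import urllib.robotparser
from urllib.parse import urlparse

USER_AGENT = "example-research-bot"          # assumed identifier for illustration
TARGET_URL = "https://example.com/articles"  # assumed target page

def allowed_by_robots(url: str, user_agent: str) -> bool:
    """Check whether the site's robots.txt permits this agent to crawl the URL."""
    parsed = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

if allowed_by_robots(TARGET_URL, USER_AGENT):
    # Fetch only if the publisher has not disallowed crawling this page.
    req = urllib.request.Request(TARGET_URL, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        html = resp.read()
    print(f"Fetched {len(html)} bytes; personal-data filtering still required.")
else:
    print("Skipping: robots.txt disallows crawling this page.")
```

Notably, a robots.txt check reflects only the publisher's crawling preferences; it says nothing about the consent of the individuals whose personal data may appear on the page, which is precisely the gap the Act's public-data exemption leaves open.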
Additionally, the Act permits data processing for research, archiving, and statistical purposes,[41] provided it adheres to government-prescribed standards. However, it remains unclear whether private AI firms can benefit from this exemption, raising concerns about the applicability of these provisions to commercial AI development.
Another significant limitation of the DPDPA is its restricted extraterritorial scope. Unlike the GDPR, which also reaches foreign entities that monitor or profile individuals within its territory, the DPDPA applies to foreign entities only when they offer goods or services in India. This creates a regulatory gap that could allow offshore AI firms to engage in unregulated data scraping and profiling of Indian citizens without facing legal consequences under Indian law.
To enhance regulatory oversight, the Act introduces the concept of Significant Data Fiduciaries (SDFs); AI firms handling large volumes of sensitive personal data may be classified as SDFs. Such entities would be required to appoint a Data Protection Officer (DPO), conduct independent audits, and perform Data Protection Impact Assessments (DPIAs). However, the absence of clear criteria for SDF classification raises concerns about potential arbitrary enforcement and excessive regulatory discretion over AI-driven companies.
CHALLENGES AND CRITICISM
AI's intersection with intellectual property and privacy raises critical legal and ethical challenges. Ownership remains uncertain, as most legal systems recognize only human creators, excluding AI-generated works. Courts have rejected AI authorship, discouraging investment in AI-driven creativity. Copyright concerns arise as AI models train on copyrighted content without consent, leading to disputes over fair use. Privacy risks include mass surveillance, data collection without consent, and deepfake misinformation. Liability remains unresolved, with uncertainty over responsibility between developers, users, and companies. AI bias further exacerbates discrimination in hiring and law enforcement. Regulatory inconsistencies across jurisdictions make enforcement difficult: overregulation stifles innovation, while under-regulation leaves gaps for misuse. Proposals for AI personhood raise accountability concerns. As AI challenges traditional notions of creativity, privacy, and human rights, legal frameworks must evolve to balance technological progress with individual rights, ensuring AI serves society while upholding fundamental legal principles.
CONCLUSION
The evolution of artificial intelligence systems raises major legal questions in three key areas: intellectual property rights, privacy protections, and liability standards. Currently, the Indian legal system contains no provisions recognizing AI as an inventor or author, and AI-created works are therefore not protected under the law. Privacy concerns are equally pressing: AI processes huge amounts of personal data, often without the data subject's consent, enabling deepfakes and surveillance. Liability, too, remains poorly defined; it is unclear whether the developers of an AI system, its users, or the corporations deploying it are responsible for the harm it causes. Legal frameworks addressing these gaps must therefore balance the need to promote innovation against the need to ensure accountability. Certain AI-generated works could be entitled to some limited protection, but human control would still be required. This paper has also established that the risks can be reduced through the enforcement of stronger data protection laws and more specific liability frameworks for AI. It is therefore important that governments, legal scholars, and technology experts work together to guarantee that the benefits of AI are realized responsibly and to develop a legal system that encourages innovation without compromising intellectual property, privacy, and accountability.
[1] Jolly Tewari and Malobika Bose, 'History of Artificial Intelligence' (2023) 5 Indian Journal of Law and Legal Research 1
[2] Mabel V Paul, ‘Technical, Legal and Ethical Opportunities and Challenges of Governing Artificial Intelligence in India’ (2023) 5 Indian Journal of Law and Legal Research 1
[3] M. Prabhu and J. Anil Premraj, ‘Artificial Consciousness in AI: A Posthuman Fallacy’ (2024) AI & Society https://doi.org/10.1007/s00146-024-02061-4.
[4] Colin R Davies, ‘An Evolutionary Step in Intellectual Property Rights – Artificial Intelligence and Intellectual Property’ (2011) 27 Computer Law & Security Review 601
[5] Jolly Tewari and Malobika Bose, ‘History of Artificial Intelligence’ (2023) 5 Indian Journal of Law and Legal Research 1
[6] Thaler v Vidal, 43 F 4th 1207 (Fed Cir 2022)
[7] DABUS AI Patent Case, South African Companies and Intellectual Property Commission (CIPC).
[8] Jyh-An Lee, Reto M Hilty and Kung-Chung Liu, Roadmap to Artificial Intelligence and Intellectual Property (Oxford University Press 2021) https://ssrn.com/abstract=3802232 accessed 9 February 2025.
[9]Copyright Act 1957 (India) s 13.
[10]Dave K, 'Artificial Intelligence and Intellectual Property in India' (Parker & Parker, 30 September 2024) https://www.parkerip.com/blog/artificial-intelligence-and-intellectual-property-in-india/ accessed 12 February 2025.
[11] Patents Act 1970 (India) s 2(1).
[12]Teesha Hemangkumar Soni, Impact of AI Creations and IPR Framework (Dissertation, The Maharaja Sayajirao University of Baroda 2024) https://ssrn.com/abstract=4831898 accessed 12 February 2025.
[13]Trademarks Act 1999 (India).
[14] Designs Act 2000 (India) s 2(d)
[15]Arjit Benjamin, 'India's IP Laws Need To Adapt To AI Creativity' (2023) Bar & Bench https://www.barandbench.com/law-firms/view-point/indias-ip-laws-need-to-adapt-to-ai-creativity accessed 12 February 2025.
[16] Convention Establishing the World Intellectual Property Organization (signed 14 July 1967, entered into force 26 April 1970)
[17]Agreement on Trade-Related Aspects of Intellectual Property Rights (adopted 15 April 1994, entered into force 1 January 1995) 1869 UNTS 299.
[18] Berne Convention for the Protection of Literary and Artistic Works (adopted 9 September 1886, last revised 28 September 1979).
[19]Paris Convention for the Protection of Industrial Property (adopted 20 March 1883, last revised 14 July 1967).
[20] DABUS AI Patent Case, South African Companies and Intellectual Property Commission (CIPC).
[21] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).
[22] Lawrence B. Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 North Carolina Law Review 1231.
[23] Bruce Willis AI Commercial Incident, Variety, April 28, 2023.
[24] Anil Kapoor v Simply Life & Ors, 2023 SCC OnLine Del 5976
[25] Andersen v Stability AI (ND Cal, No 3:23-cv-00201, filed 2023)
[26] Sudharsan S. Sri, ‘AI and Personality Rights: Legal Implications’ (2023) 6 Int’l JL Mgmt & Human 2278.
[27] M Prabhu and JA Premraj, ‘Artificial Consciousness in AI: A Posthuman Fallacy’ (2024) AI & Society https://doi.org/10.1007/s00146-024-02061-4 accessed [date]
[28] 'Indian News Agency ANI Sues OpenAI for Unsanctioned Content Use in AI Training' Reuters (19 November 2024) https://www.reuters.com/technology/artificial-intelligence/indian-news-agency-ani-sues-openai-unsanctioned-content-use-ai-training-2024-11-19/
[29] 'The Ethics and Challenges of Legal Personhood for AI' (22 April 2024) https://www.yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai
[30] Consumer Protection Act 2019, s 2(33)
[31] Saurabh Vishwakarma, 'Understanding AI and Product Liability in India' (2023) Journal of Law and AI Regulation 15.
[32] Consumer Protection Act 2019, s 85.
[33] Digital Personal Data Protection Act 2023 (India).
[34] A Kumar and V Sharma, ‘AI and Data Protection: A Legal Perspective’ (2023) 15(3) Journal of Technology Law 45.
[35] R Singh, ‘Web Scraping and Privacy Law: Implications of the DPDPA’ (2023) 12(2) Indian Journal of Cyber Law 89.
[36] Digital Personal Data Protection Act 2023 (India), s 14
[37] Regulation (EU) 2016/679 (General Data Protection Regulation) art 3.
[38] Personal Information Protection Law of the People's Republic of China 2021, art 73
[39] Personal Data Protection Act 2012 (Singapore), s 4
[40] D Mishra and S Patel, ‘Publicly Available Data and AI Training: A Comparative Analysis of Privacy Laws’ (2023) 10(1) Technology and Law Review 34.
[41] Digital Personal Data Protection Act 2023 (India), s 17.