Artificial Intelligence Impacts on Intellectual Property Laws and Policy
Paper Code: AIJACLAV18RP2025
Category: Research Paper
Date of Publication: May 19, 2025
Citation: Ms. Ratna Singh, “Artificial Intelligence Impacts on Intellectual Property Laws and Policy”, 5 AIJACLA 194, 194-208 (2025), <https://www.aequivic.in/post/artificial-intelligence-impacts-on-intellectual-property-laws-and-policy>
Author Details: Ms. Ratna Singh, Student, Babasaheb Bhimrao Ambedkar University Lucknow
Abstract
The integration of artificial intelligence (AI) into intellectual property (IP) law is rapidly transforming the landscape of ownership, regulation, and ethical considerations in both the creation and enforcement of IP rights. As AI technology advances, it challenges traditional notions of IP ownership, especially in determining the eligibility of AI-generated content for copyright and patent protection, and clarifying ownership between human creators and AI developers. This paper examines the regulatory and ethical implications of AI’s impact on IP law and policy, focusing on key issues such as the ownership of AI-generated IP, the ethical concerns surrounding automated IP compliance and surveillance, and the evolving role of regulatory bodies in adapting policies to accommodate AI’s influence. Ethical dilemmas emerge as AI systems take on more roles in IP enforcement, raising concerns about privacy infringements, surveillance, and potential biases in decision-making processes. These developments necessitate a balanced approach to IP law that ensures fairness, transparency, and accountability while fostering innovation. Policymakers face the challenge of creating new legal frameworks that can effectively manage IP rights in an AI-dominated world. This paper provides a forward-thinking analysis of the intersection between AI and IP law, emphasizing the need for proactive regulatory measures and ethical considerations to address the transformative role of AI in IP management and policy development.
Keywords: Intellectual Property Law, Artificial Intelligence, AI-dominated World, Regulatory Measures, Ethical Considerations, Transparency and Accountability
1. INTRODUCTION
In recent years, the rapid development of artificial intelligence (AI) has raised significant challenges for intellectual property (IP) law. As AI systems increasingly contribute to the creation of music, art, literature, and inventions, traditional IP frameworks—designed around human creators—are being tested. This shift calls into question issues such as ownership and copyright protection for AI-generated works.
In copyright law, a key issue is determining who holds the rights to works created by AI: the AI itself, the developer of the AI, or the user who directs the system. Current copyright laws generally do not recognize non-human creators, as seen in the Naruto v. Slater case [1], where a monkey's selfie was denied copyright protection. With AI-generated content, the boundary between human and machine input becomes increasingly blurred, complicating ownership determinations.
Patent law is also facing challenges, as AI systems are capable of generating inventions. In the DABUS case, both the European Patent Office (EPO) and the U.S. Patent and Trademark Office (USPTO) rejected applications listing AI as an inventor, emphasizing that patent law requires a human inventor. Some experts propose adapting IP law to allow for joint authorship when AI collaborates with humans, while others worry that granting IP rights to AI could undermine human creators and displace workers, raising ethical concerns.
Additionally, AI's ability to automate IP enforcement could lead to more efficient detection of copyright violations. However, this raises privacy and surveillance issues, as automated systems may infringe on individual rights without adequate safeguards.
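To illustrate, in simplified form, what such automated enforcement involves, the following minimal Python sketch flags uploads whose text closely matches a registered work. The registered works, the similarity threshold, and the function name are hypothetical assumptions introduced purely for illustration; real enforcement systems use far more sophisticated fingerprinting, but the basic pattern of compare, score, and flag is the same.

```python
# Illustrative sketch only: a toy automated check that flags uploads whose text
# closely resembles a registered work. All data and thresholds are hypothetical.
from difflib import SequenceMatcher

REGISTERED_WORKS = {
    "work-001": "The quick brown fox jumps over the lazy dog.",
    "work-002": "All human beings are born free and equal in dignity and rights.",
}

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff; setting it is itself a policy choice


def flag_possible_infringement(upload_text: str) -> list[tuple[str, float]]:
    """Return (work_id, score) pairs whose similarity to the upload exceeds the threshold."""
    flags = []
    for work_id, work_text in REGISTERED_WORKS.items():
        score = SequenceMatcher(None, upload_text.lower(), work_text.lower()).ratio()
        if score >= SIMILARITY_THRESHOLD:
            flags.append((work_id, round(score, 3)))
    return flags


if __name__ == "__main__":
    print(flag_possible_infringement("the quick brown fox jumps over a lazy dog"))
    # A match is only a *candidate* violation: fair use, licences, and context
    # still require human review -- the safeguard the surrounding text calls for.
```

Even in this toy form, the sketch makes the privacy trade-off visible: every uploaded work must be inspected and scored for the check to run at all.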
To address these challenges, policymakers must update IP laws to account for AI’s role in creation and enforcement while balancing innovation, ethical principles, and human creators' rights.
1.1 Research Scope and Significance
The rapid advancement of artificial intelligence (AI) is significantly reshaping creative and inventive processes, prompting a critical reevaluation of intellectual property (IP) law. As AI systems become increasingly capable of autonomously generating content, such as artwork and scientific inventions, new legal, ethical, and regulatory challenges emerge. Traditional IP laws, designed with the assumption that human creators and inventors require protection, are proving insufficient in addressing the complexities of non-human creativity.
One primary concern is ownership. Traditionally, IP ownership has been clear, with human creators or inventors being the recognized owners. However, as AI generates content independently, questions arise regarding ownership. Should ownership be attributed to the programmer, the operator, or the AI itself? This ambiguity could lead to legal disputes, affecting industries reliant on AI-driven innovations.
AI also challenges compliance and enforcement within IP law. While AI technologies can efficiently detect infringements, issues of privacy, surveillance, and algorithmic bias must be addressed. Unchecked biases could lead to disproportionate enforcement, impacting the fair application of IP rights across different sectors. Additionally, the data collection required for automated enforcement raises privacy concerns and the potential for misuse of personal information.
Ethical dilemmas are central to the debate over AI-generated IP. Granting AI-created works the same legal protections as those by humans could diminish incentives for human creators, potentially altering the landscape of cultural and technological innovation. This raises concerns about the future of human creativity in an AI-dominated environment and how AI-generated works should be valued within broader societal contexts.
Regulation plays a crucial role in addressing these challenges. Existing IP laws lack provisions for non-human creators or inventors, leading to inconsistencies in judicial rulings. Understanding AI's impact on IP law is essential to identifying gaps in the current framework and guiding the development of a more inclusive, transparent, and adaptive legal structure. This approach will help balance technological progress with the protection of human creativity and ethical values, ensuring that IP law remains responsive to the evolving landscape of innovation.
2. WHO OWNS AI-GENERATED CONTENT? EXPLORING INTELLECTUAL PROPERTY LAWS
2.1 Defining Intellectual Property in AI-Generated Creations
AI-generated intellectual property (IP) refers to works created by AI systems autonomously or with minimal human input, challenging traditional boundaries between human and machine-generated content. AI's growing role in creative fields such as literature, art, music, and invention raises complex legal and philosophical questions about creativity and ownership. While traditional IP laws assume human authorship, cases like DABUS, where AI was listed as an inventor, highlight the limitations of existing frameworks, as courts ruled that only humans can hold such titles. To address these issues, scholars propose adapting IP laws by either excluding fully autonomous AI outputs from protection or establishing new classifications for AI-generated works, ensuring a fair balance between human creators' rights and AI's expanding role in innovation.
2.2 AI and Intellectual Property: Legal Challenges and Implications
2.2.1 Copyright Implications
The rise of AI-generated content poses significant challenges for copyright law, which traditionally assumes human authorship. Copyright protects "original works of authorship," granting creators exclusive rights over reproduction and distribution. However, as AI technologies like neural networks and generative adversarial networks (GANs) can now independently produce content in art, literature, and music, questions arise about how copyright law should apply to machine-generated works[2]. Many jurisdictions, including the U.S., require human authorship for copyright eligibility, complicating the protection of AI-generated works. The U.S. Copyright Office has stated that works created by machines with no human input or creative intervention are not eligible for copyright [3], and the 2018 U.S. case Naruto v. Slater[4] reinforced this principle by ruling that a photograph taken by a monkey lacked human authorship and therefore could not be copyrighted [5].
Proposed solutions include assigning copyright to AI developers or operators who provide creative direction or creating a separate category within copyright law for AI-generated works, granting limited rights based on human involvement. However, both approaches face challenges. Assigning authorship to developers may fail to fully acknowledge AI's autonomous creativity, while establishing a new legal category could complicate existing frameworks and enforcement mechanisms. These issues highlight the need for legal reforms to address the complexities of AI-generated content within traditional copyright structures.
2.2.2 Patent Implications
AI's capacity to autonomously generate patentable inventions poses significant challenges to traditional patent law, which typically requires a human inventor. Patent law protects innovations that meet criteria such as novelty, non-obviousness, and utility, granting exclusive rights to inventors. However, when AI systems create inventions independently, such as new chemical compounds or optimized algorithms, the legal framework’s human-centric requirements become problematic, as demonstrated in the DABUS case. In this instance, Dr. Stephen Thaler sought to have AI listed as the inventor for two creations, but patent offices in the U.S., Europe, and the U.K. rejected the applications, maintaining that only humans can be designated as inventors, thus reinforcing the limitations of current patent law[6].
This issue raises both practical and ethical concerns. Granting patents to non-human inventors could undermine human creativity and potentially allow companies to monopolize technological advancements by relying heavily on AI. On the other hand, excluding AI-generated inventions from patent protection could hinder investment in AI research and the commercialization of groundbreaking innovations. To address these challenges, some legal experts suggest updating patent laws to recognize "non-human inventorship" for inventions where AI demonstrates significant autonomy. This could involve limited protections or a redefinition of inventorship criteria to accommodate non-human creators, ensuring that patent laws remain relevant in an AI-driven technological landscape while maintaining the principles of innovation and fair competition [7].
2.3 Ownership Rights: Creators vs. AI Developers
2.3.1 Human Creator vs. Developer Rights
As AI systems increasingly generate valuable intellectual property, ownership disputes between human creators and AI developers are becoming more frequent. A central issue in these disputes is whether ownership rights should belong to the person using the AI tool or the developer who created the system. Traditional copyright and patent laws grant rights to human creators, but AI complicates this framework, as the creative or inventive process is often automated with minimal or no direct human input. For instance, a visual artist using AI to generate a painting might assume they own the copyright, but the AI developer could argue for partial or full ownership, asserting their system's critical role in the creation. This issue is also reflected in software licensing agreements, where developers often retain ownership of derivative works created using their AI tools, while users may only hold a license to the work[8].
In patent law, ownership questions arise when AI is involved in the inventive process. Developers may claim patent rights if the AI-generated invention is seen as a product of their software’s code and algorithms, while end users might argue that their input or decisions, which influenced the AI's output, grant them ownership. This highlights the need for clearer IP laws to define ownership criteria when both developers and users contribute to the creative or inventive process. Legal scholars suggest that IP law reforms could clarify ownership based on human involvement or create shared ownership models that recognize contributions from both parties, helping prevent prolonged legal disputes.
2.4 Case Studies in AI-Driven Intellectual Property Law
The emergence of AI-generated intellectual property has led to several landmark cases and legal precedents that provide critical insights into ownership disputes and the evolving landscape of IP law. Two significant cases—the DABUS case and the Naruto v. Slater case—illustrate the challenges and considerations surrounding AI-generated works.
The DABUS Case[9]
The DABUS case represents a key moment in AI and intellectual property law. Dr. Stephen Thaler, creator of the AI system DABUS, filed patent applications in the U.S., Europe, and Australia, naming DABUS as the inventor of two inventions: a beverage container and a flashing light for search-and-rescue operations. Thaler argued that DABUS autonomously generated these inventions without human input, challenging the traditional requirement of human inventorship in patent law.
Patent offices in the U.S. and Europe rejected the applications, stating that patent law mandates that only humans can be inventors. The USPTO emphasized that invention must originate in a human mind (USPTO, 2020). The Australian Federal Court initially ruled in favor of Thaler, suggesting that the law could evolve to accommodate AI inventorship, although that decision was later overturned on appeal, leaving the predominantly human-centric legal framework intact[10].
The DABUS case highlights the need for patent law reform to address AI-generated inventions and raises important questions about the role of AI in the innovation process.
The Naruto v. Slater Case [11]
The Naruto v. Slater case offers a significant precedent, though it does not directly involve AI. In this case, the animal-rights organization PETA sued photographer David Slater on behalf of Naruto, a crested macaque, claiming that the monkey held the copyright in a famous selfie taken with Slater's camera. The Ninth Circuit Court ruled that non-human entities cannot hold copyright, reinforcing that copyright protection is limited to human authors (U.S. Court of Appeals for the Ninth Circuit, 2018).
These cases highlight critical challenges in ownership disputes related to AI-generated works. The DABUS case emphasizes the need for a reevaluation of inventorship criteria, while Naruto v. Slater reinforces the principle that U.S. copyright law requires human authorship. Together, they underscore the need for legislative reform to adapt IP frameworks to the increasing role of AI in creative processes.
As AI continues to impact intellectual property creation, these legal precedents provide important guidance for policymakers, legal scholars, and creators. They stress the importance of developing a balanced approach that acknowledges both human creativity and AI's contributions, ensuring IP laws remain relevant amidst technological advances.
3. ETHICAL DILEMMAS IN AI-BASED IP PROTECTION & SURVEILLANCE
3.1 Surveillance Technology and the Erosion of Privacy
3.1.1 Data Collection Ethics
As AI technologies play a central role in automated surveillance systems, ethical concerns about data collection practices have become more prominent. These systems often rely on algorithms that analyze large amounts of personal data, such as images, communications, and behavior patterns, raising dilemmas related to consent, transparency, and accountability. A key ethical issue is that individuals should be fully informed about how their data is collected and used, yet in many cases, data is gathered without explicit consent, undermining privacy and autonomy.
Many surveillance technologies, such as facial recognition systems, lack transparency in their data collection methods. These systems may capture individuals' images in public spaces without their knowledge, contributing to concerns about pervasive surveillance and the erosion of privacy. The aggregation of data to create profiles or track behaviors further exacerbates the risk of biased or discriminatory outcomes based on race, ethnicity, or other sensitive characteristics.
The ethical responsibility also extends to the developers and operators of AI surveillance systems, who must ensure that their technologies are used ethically and do not infringe on individuals' rights. Without comprehensive ethical guidelines for data collection and usage, there is a risk of power abuse, where data is exploited for unintended purposes. This underscores the need for clear ethical frameworks and regulatory measures to govern AI-driven surveillance, ensuring transparency, accountability, and respect for privacy.
3.1.2 Impacts on Privacy Rights
AI-powered intellectual property (IP) monitoring and automated surveillance pose significant risks to privacy, raising concerns about invasive practices that infringe on personal freedoms. AI systems that track, monitor, and analyze large volumes of data to enforce IP rights can also lead to excessive scrutiny of individuals, potentially undermining privacy. While intended to protect IP, these practices may infringe on personal privacy by monitoring user behavior and tracking online activities.
One major concern is the chilling effect of constant surveillance on individual freedoms. When people know they are being monitored, they may alter their behavior, limiting their willingness to express themselves or engage in creative activities. This self-censorship harms personal autonomy and undermines free speech and creativity, essential for fostering innovation. Furthermore, the misuse of personal data collected through surveillance can result in identity theft, unauthorized access to sensitive information, and other privacy violations, negatively impacting individuals' lives.
The societal implications of privacy violations linked to AI-driven IP monitoring are also troubling. Overly invasive surveillance can perpetuate discrimination and reinforce power imbalances, disproportionately affecting marginalized communities and exacerbating social inequalities through biased data analysis [12].
To address these risks, comprehensive legal frameworks are needed to protect privacy rights in AI-driven IP monitoring and surveillance. By emphasizing transparency, accountability, and respect for personal autonomy, society can balance the need to protect IP with the preservation of fundamental privacy rights.
3.2 Bias in AI-Driven IP Compliance
3.2.1 Algorithmic Bias and Fairness
Algorithmic bias poses significant challenges in AI-driven IP compliance, potentially leading to unfair outcomes for individuals and businesses. AI systems rely on training data to make predictions, and if this data is biased or unrepresentative, the resulting algorithms can perpetuate or amplify existing inequalities. In the context of IP enforcement, biased algorithms can impact the detection of copyright infringement, the monitoring of violations, and the imposition of penalties. For instance, an AI system trained primarily on works from certain demographic groups may fail to fairly recognize content created by individuals from underrepresented backgrounds, resulting in disproportionate targeting of specific creators or content types while overlooking violations elsewhere (Eubanks, 2018). This undermines the fairness that should be central to IP law.
Algorithmic bias can also affect the outcomes of IP litigation. AI systems used in predictive analytics to assess litigation outcomes may favor certain litigants based on historical data that reflects systemic biases in judicial decisions. If past rulings show a pattern of favoring particular types of claims or parties, the AI may perpetuate these biases in its predictions, leading stakeholders to make flawed strategic decisions.
These biases threaten the credibility of IP compliance mechanisms, raising both legal and ethical concerns. To address this, stakeholders—including creators, developers, and policymakers—must recognize and mitigate algorithmic bias by diversifying training datasets, auditing algorithms for fairness, and involving diverse perspectives in AI system development and deployment.
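As a concrete illustration of the auditing step described above, the following minimal Python sketch, assuming a small hypothetical log of enforcement decisions, compares how often works from different creator groups are flagged and computes a simple disparity ratio. The group labels, the log, and the 0.8 parity benchmark are illustrative assumptions rather than a prescribed legal standard.

```python
# Minimal fairness-audit sketch over a hypothetical log of automated IP-compliance
# decisions. Data, group labels, and the parity benchmark are assumptions for illustration.
from collections import defaultdict

# Each record: (creator_group, was_flagged)
enforcement_log = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]


def flag_rates(log):
    """Share of works flagged by the automated system, per creator group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in log:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}


def disparity_ratio(rates):
    """Ratio of lowest to highest flag rate; values far below 1 suggest skewed enforcement."""
    return min(rates.values()) / max(rates.values())


rates = flag_rates(enforcement_log)
print(rates)                   # {'group_a': 0.25, 'group_b': 0.75}
print(disparity_ratio(rates))  # ≈ 0.33, far below a 0.8 parity benchmark, so the system merits review
```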
3.2.2 Transparency in AI Decision-Making
Transparency is essential to ensuring fairness in AI-based IP decisions, particularly when addressing algorithmic bias and compliance. AI systems used for IP rights enforcement and compliance monitoring must provide stakeholders with insight into their algorithms, training data, and decision-making processes. Without transparency, it becomes difficult to hold AI systems accountable and ensure they operate justly. A key issue is the "black box" phenomenon, where the complexity of AI algorithms obscures the rationale behind decisions, making it challenging for individuals and organizations to challenge or appeal potentially biased outcomes. For example, if an AI system flags a creator’s work as infringing on IP rights, the creator should have access to the reasoning behind the decision, including the criteria and data used.
Transparency not only allows stakeholders to assess whether AI systems are acting fairly, but also builds trust. When creators and developers understand how AI works and can verify its fairness, they are more likely to engage positively with these technologies. This trust is vital as AI adoption in IP compliance grows. To enhance transparency, best practices should include clear documentation of algorithms and training data, regular audits to detect biases, and active stakeholder involvement in the development and oversight of AI systems.
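To show what such documentation might look like in practice, the sketch below records a hypothetical, machine-readable explanation alongside each automated flag, so that an affected creator (or an auditor) can see the model version, the matched work, the score and threshold, a summary of the training data, and where to request human review. The field names and values are assumptions introduced for illustration only and do not describe any existing system.

```python
# Illustrative sketch of a decision record attached to each automated flag.
# Every field name and value here is hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class FlagDecisionRecord:
    flagged_work: str             # identifier of the work that was flagged
    matched_registered_work: str  # registered work the system matched against
    similarity_score: float       # score that triggered the flag
    threshold: float              # decision threshold in force at the time
    model_version: str            # model or ruleset that produced the decision
    training_data_summary: str    # plain-language description of the training data
    appeal_contact: str           # where human review can be requested
    timestamp: str = ""           # filled in automatically when the record is created

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


record = FlagDecisionRecord(
    flagged_work="upload-7741",
    matched_registered_work="work-002",
    similarity_score=0.91,
    threshold=0.85,
    model_version="ip-matcher-2.3",
    training_data_summary="Licensed English-language text corpus, 2010-2024",
    appeal_contact="ip-review@example.org",
)

# Serialising the record gives the creator a concrete, inspectable explanation
# rather than a "black box" outcome.
print(json.dumps(asdict(record), indent=2))
```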
In conclusion, addressing algorithmic bias and ensuring transparency in AI-driven IP compliance are critical for fostering fairness. By prioritizing transparency, stakeholders can use AI to promote justice and equity, rather than perpetuating existing inequalities.
4. REGULATING AI-GENERATED CONTENT: POLICY APPROACHES FOR IP LAW
4.1. New Legal Frameworks and Adaptations for AI
4.1.1 Expanding the Definition of IP
The rise of AI technology challenges existing intellectual property (IP) frameworks, requiring a reevaluation of how IP law defines and protects creations. Traditional IP law, focused on human authorship and invention, must adapt to include AI-generated works. For example, copyright law currently restricts authorship to humans (WIPO, 1971), but expanding this definition to include both human and AI contributors would clarify ownership and reduce legal ambiguity.
Similarly, patent law must address AI-driven innovations. The current criteria for patentability—novelty, non-obviousness, and utility—are human-centric. A potential solution is to create a distinct category for AI-generated inventions, ensuring appropriate protection. Additionally, as AI and humans collaborate, joint authorship models should be considered to fairly distribute rights.
Beyond these definitional changes, policymakers will also need clear rules on ownership, transparency, ethical use, and continued research into AI's impact on IP law; these AI-specific policy measures are set out in the next subsection.
4.1.2. Incorporating AI-Specific Policies
As AI technologies become integral to creative processes, the need for AI-specific policies within IP law is increasingly critical. Policymakers must address the challenges posed by AI-generated works, focusing on innovation, protection, and ethical considerations. A key area for policy development is the establishment of guidelines for the ownership of AI-generated works. Given that traditional IP laws do not clearly define rights for AI outputs, policies should specify that ownership defaults to the human user or creator of the AI, or alternatively, introduce a shared ownership model between AI developers and users.
Transparency and accountability in AI-generated content are also essential. Regulations should require AI systems to disclose data sources, algorithms, and decision-making processes, promoting ethical AI usage and providing a basis for challenging decisions, especially in copyright and patent contexts. This transparency would ensure that creators and users can trace the origins of AI-generated works and prevent misuse.
Ethical guidelines are equally important. Policymakers should establish standards to govern the ethical use of AI in content creation and IP enforcement. These guidelines should address issues like algorithmic bias, privacy concerns, and the potential for misuse in surveillance and enforcement, ensuring AI technologies respect creators' rights while fostering innovation.
Finally, AI-specific policies should include provisions for ongoing research into AI's implications on IP law. Policymakers can promote innovation by funding research into new models of IP protection and compliance for AI-generated works, contributing to a more adaptive legal framework.
In conclusion, AI-specific policies within IP law are essential for addressing the challenges of AI-generated works. By clarifying ownership, promoting transparency, ensuring ethical practices, and supporting ongoing research, policymakers can create a legal environment that fosters innovation while protecting creators' rights.
5. CHALLENGES IN IMPLEMENTING AI-CENTRIC IP POLICIES
The rapid advancement of AI technologies has exposed significant gaps in intellectual property (IP) law, underscoring the need for legal reforms. Traditional IP laws are based on human authorship, but when AI systems autonomously generate content, such as text, images, or inventions, questions about authorship and ownership arise. Existing laws generally do not account for AI as a creator, leading to uncertainty around ownership rights. For instance, the U.S. Copyright Office has stated that works created solely by AI, without human intervention, are not eligible for copyright protection [13]. This legal void creates challenges for creators relying on AI, potentially hindering innovation. Additionally, the "black box" nature of AI algorithms complicates IP enforcement, especially in cases involving potential copyright infringement or derivative works, raising concerns about originality and fair use.
The issue of liability in IP infringement cases involving AI systems further complicates the situation. It remains unclear whether responsibility should lie with the AI developer, the user, or the AI itself, leaving stakeholders uncertain of their legal standing. As policymakers work to develop AI-specific IP policies, they face the challenge of balancing innovation with regulation. While regulation is necessary to address issues like misuse, bias, and IP infringement, overly restrictive measures could stifle innovation. Policymakers must strike a balance that encourages responsible AI development while safeguarding ethical standards and legal protections, ensuring that IP laws evolve to meet the complexities of AI-generated works.
6. CONCLUSION
This paper explored the intersection of AI and intellectual property (IP) law, underscoring the profound impact AI technologies are likely to have on existing legal frameworks. It highlighted the urgent need for continuous updates to IP law to address emerging challenges as AI evolves. The discussion centered on the complexities surrounding AI-generated works, particularly the questions of ownership, copyright, and patent law, as well as the ongoing tension between human creators and AI developers. Through case studies and legal precedents, the paper illustrated how courts are beginning to navigate disputes over AI-generated content and offered insights into the evolving legal landscape.
Ethical considerations were also examined, particularly privacy concerns tied to automated surveillance and the risks of algorithmic bias in AI-driven IP enforcement. The paper emphasized the potential threats to individual rights and the importance of ethical practices in the use of AI technologies within IP enforcement.
The role of policymakers in shaping AI-related IP laws was discussed, stressing the need for flexible regulatory frameworks that can adapt to technological advancements. The paper highlighted the importance of transparency, accountability, and collaboration among legal professionals, industry leaders, and civil society to create inclusive and effective governance strategies.
Looking forward, the paper underscored the necessity of ongoing education and the reevaluation of legal concepts to ensure that IP law remains responsive to AI's rapid development. In conclusion, it emphasized that the relationship between AI and IP law is a dynamic and evolving field, requiring continuous engagement from all stakeholders to maintain a legal system that is both effective and equitable.
REFERENCES
U.S. Court of Appeals for the Ninth Circuit. (2018). Naruto v. Slater. Retrieved from https://cdn.ca9.uscourts.gov/datastore/opinions/2025/04/23/16-15469.pdf
European Patent Office. (2020). Decision on the designation of inventor in DABUS case. Retrieved from https://www.epo.org/law-practice/legaltexts/decisions/2025
United States Patent and Trademark Office. (2020). USPTO decision on DABUS patent application. Retrieved from https://www.uspto.gov/patents/laws
Deeks, A., Allen, G., & Catanzaro, B. (2020). Machine Learning and the Law: Part II—Inventions and Inventorship. Harvard Law Review, 133(5), 122-145. Retrieved from https://harvardlawreview.org/
Samuelson, P. (2019). Allocating Ownership Rights in AI-Generated Content: Challenges and Risks. Journal of Law and Technology, 15(2), 98-123. Retrieved from https://jolt.org/15-2/98-123
Zhu, H. (2021). Privacy in the Age of Artificial Intelligence: Regulatory Challenges for Automated IP Enforcement. Computer Law & Security Review, 41(3), 99-115. DOI: 10.1016/j.clsr.2025.105422
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. DOI: 10.1016/j.bushor.2018.08.004
Gervais, D. (2020). The machine as author. IIC - International Review of Intellectual Property and Competition Law, 51(6), 745–768. DOI: 10.1007/s40319-020-00954-7
Abbott, R. (2020). I Think, Therefore I Invent: Creative Computers and the Future of Patent Law. Boston College Law Review, 61(6), 1933-1983.
U.S. Copyright Office. (2019). Compendium of U.S. Copyright Office Practices, Third Edition.
Samuelson, P. (2019). Reconceptualizing authorship in the age of AI. Harvard Journal of Law & Technology, 33(2), 487-536. DOI: 10.2139/ssrn.3386094
Ginsburg, J. (2021). Legal authorship and copyright in an AI world: Reconciling human and AI contributions. Journal of Law and Innovation, 3(1), 100-123. DOI: 10.2139/ssrn.3775085
Thaler, S. (2020). Inventorship, IP, and machine autonomy: Perspectives from the DABUS case. European Intellectual Property Review, 42(5), 225-234.
Thaler, S. (2021). Thaler v. Commissioner of Patents. Australian Federal Court. Retrieved from https://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/FCA/2021/530.html
Bennett, C. J. (2020). Privacy, technology, and the regulation of surveillance: New challenges and approaches. International Review of Law, Computers & Technology, 34(2), 119-137. DOI: 10.1080/13600869.2020.1784627
Lyon, D. (2018). The Culture of Surveillance: Watching as a Way of Life. New York: New Press.
Oberlander, J., & Tsvetkova, M. (2021). Surveillance and discrimination: Implications for AI and machine learning. AI & Society, 36(1), 27-40. DOI: 10.1007/s00146-020-00988-7
Regan, P. M. (2019). Privacy, data protection and surveillance: The role of technology. Journal of Law, Technology & Policy, 2019(1), 1-38. DOI: 10.2139/ssrn.3358448
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671-732. DOI: 10.15779/Z38D50J
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1-12. DOI: 10.1177/2053951715622512
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.
Harris, T. (2020). The Challenges of Fair Use in the Age of AI: Copyright, Technology, and Public Policy. Harvard Journal of Law & Technology, 33(2), 350-400. Retrieved from https://jolt.law.harvard.edu/assets/articlePDFs/v33/Harris.pdf
European Commission. (2021). White Paper on Artificial Intelligence: A European approach to excellence and trust.
Samuelson, P. (2017). The Copyright Office's Call for Comments on Copyright and Artificial Intelligence. Communications of the ACM, 60(11), 21-23. DOI: 10.1145/3132765
U.S. Copyright Office. (2019). The U.S. Copyright Office's Report on Copyright Registration for Works Created by Artificial Intelligence. Retrieved from https://www.copyright.gov/policy/artificial-intelligence/report.pdf
USPTO (United States Patent and Trademark Office). (2020). Guidance on the Impact of Artificial Intelligence on the Patent System. Retrieved from https://www.uspto.gov/sites/default/files/documents/AI_Patent_Guidance.pdf
Wright, A., & Kira, M. (2021). The Intersection of Artificial Intelligence and Intellectual Property: New Challenges for IP Law and Policy. Stanford Technology Law Review, 24(1), 1-35. Retrieved from https://stlr.stanford.edu/pdf/wright-kira-the-intersection-of-artificial-intelligence-and-intellectual-property.pdf
WIPO (World Intellectual Property Organization). (2021). World Intellectual Property Report 2021: Tracking the Digital Transformation. Retrieved from https://www.wipo.int/publications/en/details.jsp?id=4658
Cohen, J. E., & Lemley, M. A. (2020). Copyright in an Age of Artificial Intelligence. Stanford Law Review, 72(1), 1-34. Retrieved from https://www.stanfordlawreview.org/wp-content/uploads/2020/01/Cohen-Lemley-Copyright-in-an-Age-of-Artificial-Intelligence.pdf
Drahos, P. (2021). A Philosophy of Intellectual Property. New York: Palgrave Macmillan. DOI: 10.1007/978-3-030-55727-0
Harrison, J. (2020). AI and Intellectual Property: Navigating Ownership and Rights. Harvard Journal of Law & Technology, 34(1), 123-158. Retrieved from https://jolt.law.harvard.edu/assets/articlePDFs/v34/AI-and-Intellectual-Property-Navigating-Ownership-and-Rights.pdf
[1] Naruto v. Slater (9th Cir. Apr. 23, 2018)
[2] Kaplan & Haenlein, 2019
[3] U.S. Copyright Office, 2019
[4] Naruto v. Slater (9th Cir. Apr. 23, 2018)
[5] U.S. Court of Appeals for the Ninth Circuit, 2018
[6] European Patent Office, 2020
[7] Gervais, 2020; Abbott, 2020
[8] Reports of Patent, Design and Trade Mark Cases, Volume 48, Issue 13, 26 August 1931, Pages 447–460
[9] Thaler v Comptroller-General of Patents, Designs and Trade Marks, UKSC/2021/0201
[10] Thaler v Commissioner of Patents [2021] FCA 879
[11] Naruto v. Slater (9th Cir. Apr. 23, 2018)
[12] Oberlander & Tsvetkova, 2021
[13] U.S. Copyright Office, 2022