Artificial Intelligence puts Australian Patent and Copyright Law in a Bind

As artificial intelligence accelerates into domains once thought uniquely human, Australia’s intellectual property laws are being pushed to their limits. In this article, Errol Price, Director Legal at Symmetra Pty Ltd, examines a growing tension at the heart of the system: modern AI can generate inventions and creative works with little or no human involvement, yet the law continues to recognise only human inventors and authors. Recent decisions in Australia and overseas have reaffirmed this principle, but they have also exposed a deeper gap in the legal framework: one that raises critical questions about ownership, incentives, and the very purpose of IP protection in an era where machines can create independently.
 
WEB2510N12 - AI for Litigation

The rapid emergence of generative artificial intelligence has exposed a difficult and unresolved problem within intellectual property law. Modern AI systems are increasingly capable of generating technical inventions, artistic works, and designs with minimal or even no direct human input. Yet the legal frameworks governing patents and copyright in Australia — as in most common law jurisdictions — remain firmly built around the assumption that creative or inventive activity originates from a human being.

Recent litigation across numerous jurisdictions has reaffirmed a central principle of intellectual property doctrine: that authors and inventors must be human. However, this clarification of the law has also revealed a deeper structural challenge. If artificial intelligence can produce valuable inventions or creative works autonomously, the existing legal framework may struggle to determine whether — and how — such outputs should be protected.

The resulting uncertainty is emerging as one of the most significant doctrinal questions facing patent and copyright law.

The Thaler litigation and the limits of current law

The issue came to prominence through litigation brought by the American computer scientist Stephen Thaler, who attempted to obtain patent protection for inventions allegedly generated by an AI system known as DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). Thaler argued that the inventions were created autonomously by the system and that DABUS should therefore be recognised as the inventor.

In Australia, the litigation briefly appeared to open the door to such recognition. In 2021, a single judge of the Federal Court held that the concept of an inventor might be capable of including an artificial intelligence system. However, that decision was overturned on appeal in 2022. The Full Court of the Federal Court held that, under the Patents Act 1990 (Cth), the term “inventor” refers to a natural person.

Parallel litigation in the United States and the United Kingdom produced the same outcome. Courts in both jurisdictions concluded that the relevant patent statutes require an inventor to be a human being.

Attempts to secure copyright protection for works generated by artificial intelligence have similarly failed. In the United States, for example, courts have confirmed that copyright subsists only in works created through human authorship.

Australian copyright doctrine reflects the same principle. The Copyright Act 1968 (Cth) has long been interpreted as requiring human intellectual effort as the foundation of copyright protection.

Taken together, these decisions establish a clear position: under existing law, artificial intelligence cannot be an inventor or an author.

While the courts have clarified the position under existing statutes, the decisions also expose a significant gap. The difficulty arises when an artificial intelligence system produces a novel technical solution or creative work without a human being who can meaningfully be described as the inventor or author.

In such circumstances, it is unclear whether the resulting output can attract intellectual property protection at all. Patent law requires the identification of an inventor. Copyright law requires the identification of an author. If neither exists in human form, the output may fall outside both regimes.

This possibility creates what some scholars have described as an “inventorship gap” or “authorship gap”. The law has been constructed on the assumption that creative or inventive activity originates in the human mind. Artificial intelligence challenges that assumption.

Can artificial intelligence invent or create independently?

Technologically, there is little doubt that modern AI systems can generate novel outputs.

Generative models can design engineering components, discover chemical compounds, produce software code, and create artistic works such as images, music, or literature. Some systems operate through large-scale exploration of design possibilities, testing and refining potential solutions through machine learning techniques.

In theory, such systems could produce inventions that satisfy the traditional criteria for patentability — novelty, inventive step, and usefulness. Similarly, they could produce creative works that meet the originality threshold required for copyright protection. The fundamental challenge is how the law attributes authorship or inventorship when no human can clearly be said to have created them.

The incentive problem

This issue goes beyond technical doctrine and touches the policy foundations of intellectual property law.

Patent law has traditionally been justified as a bargain between society and inventors. Inventors disclose their inventions to the public, and in return they receive a temporary monopoly over the use of the invention. The purpose of this arrangement is to encourage innovation and the dissemination of knowledge. However, artificial intelligence does not respond to incentives. It does not require financial reward or recognition. If inventions can be generated autonomously by machines, the traditional rationale for granting exclusive patent rights may become less clear.

At the same time, denying patent protection for AI-generated inventions could create its own problems. Firms might respond by keeping such discoveries secret rather than disclosing them publicly. This would undermine one of the central functions of the patent system — the creation of a publicly accessible body of technical knowledge.

The law therefore faces a dilemma. Granting patents for machine-generated inventions may extend monopolies beyond their traditional justification. Denying patents may discourage disclosure.

The prospect of an “invention flood”

A further concern arises from the potential scale of AI-driven discovery. Artificial intelligence systems can explore enormous design spaces and generate thousands of candidate solutions in a relatively short period of time. If every such solution could be patented, the number of patent applications could increase dramatically.

This could create dense networks of overlapping patents — often referred to as “patent thickets” — which might make innovation more difficult rather than easier. Businesses could face significant legal complexity when attempting to develop new technologies in fields already crowded with AI-generated patents. Patent offices, already under pressure, might also struggle to examine such a volume of applications.

Ownership and accountability

Another unresolved question concerns ownership. In traditional patent law, the inventor is the initial owner of the invention and may assign rights to an employer or commercial partner. If no human inventor exists, it becomes difficult to determine how those rights arise in the first place.

Companies developing or operating artificial intelligence systems may claim ownership of the outputs, but the doctrinal basis for such claims is not always clear under existing law. This problem is particularly significant in industries such as pharmaceuticals, materials science, and advanced software development, where AI-driven discovery is becoming increasingly common.

Possible directions for reform

Although the courts have reaffirmed the human-centred nature of intellectual property law, policy discussions about potential reform are now underway in many jurisdictions.

One possibility is simply to maintain the current approach and treat artificial intelligence as a tool used by human inventors or authors. Under this model, intellectual property protection would depend on demonstrating sufficient human involvement in the creative or inventive process.

Another option would be to attribute inventorship or authorship to the human developer or operator of the AI system. This approach would recognise the role of those who design or control the technology.

More radical proposals have suggested creating a new, specialised form of intellectual property protection for AI-generated works. Such a regime would recognise the distinctive nature of machine-generated outputs while avoiding the conceptual difficulties of treating artificial intelligence as a legal person.

Finally, some commentators argue that AI-generated inventions should remain unprotected and enter the public domain. According to this view, machine-generated knowledge should be freely available to society.

A challenge for Australian intellectual property law

For Australia, the issue presents a significant policy challenge. The decisions of the Federal Court have confirmed that the existing statutory framework requires human inventors and authors. Yet technological developments are moving rapidly in a direction that may increasingly test that assumption.

If artificial intelligence systems become capable of producing valuable inventions and creative works without meaningful human input, Australian intellectual property law may eventually confront difficult choices. It must balance the goals of encouraging innovation, ensuring fair competition, and maintaining a coherent legal framework.

As artificial intelligence becomes more deeply integrated into scientific discovery and creative production, Australian lawmakers and courts may need to reconsider whether existing intellectual property doctrines remain adequate — or whether new approaches will be required to address a technological landscape in which invention and creativity are no longer exclusively human activities.

 

 

Errol Price

Errol Price, Director Legal, Symmetra Pty Ltd

Errol Price’s decades of experience in commercial law, and specifically as an advisor to leading companies on equity, discrimination and workplace relations issues, add significant value to Symmetra’s understanding of the complexities of the workplace. His track record in formulating human resource and workplace relations policies for many multinational and blue-chip companies, together with his advice to clients on the impact of equity and anti-discrimination law, has helped position Symmetra as one of the leading consultancies on diversity and inclusion.

More recently he has specialised in the law pertaining to discrimination, harassment and bullying in the Australian workplace. This has provided the legal foundation for Symmetra’s highly successful diversity, EEO and anti-bullying and harassment programs, delivered across Australia for the past 10 years.

Errol conducts workshops for public and private sector organisations in Australia on dealing with unlawful and inappropriate behaviour. He advises organisations on managing bullying and designing harassment policies, and helped establish the complaints-handling processes for a large NSW state department. He is regularly invited by leading providers of continuing legal education for practitioners, such as Legalwise and ICLE, to deliver presentations on selected legal topics.