Before the law can adapt, artificial intelligence is already reshaping how decisions are made, actions are carried out, and harm can occur. In this thought leadership piece, Errol Price, Director Legal at Symmetra, explores how rapidly evolving AI systems - from generative tools to agentic and physical AI - are exposing gaps in existing legal doctrines, and why courts, legislators and regulators may soon need to reconsider whether treating AI purely as a legal “object” remains fit for purpose.
Technological changes frequently reveal gaps and fault lines in legal structures and doctrines. The task then falls to courts, legislators and regulators to plug those gaps or, sometimes, to revise existing legal principles. This is certainly the case with artificial intelligence (AI), which has been described as the most consequential technological development in the history of mankind.
The rate of advance in AI is astounding. The widely used generative AI platforms function as software tools that respond to queries or directions: they create content, answer questions and provide information. We are now moving relentlessly into the era of agentic AI: systems that absorb input from the environment, process it, determine, on their own, one or more goals to be achieved, and then execute a self-generated plan to achieve those goals.
To add a further layer of complexity, we are experiencing an explosion in the development of physical AI, where AI software is embedded in machines that act in the real world. Examples include driverless cars, warehouse packing systems and humanoid robots.
Currently, Australian law, like the law of comparable jurisdictions, treats AI systems as tools or legal objects rather than as legal subjects capable of bearing rights and obligations.
This principle has been applied in recent years to DABUS, an AI system developed by the American researcher Dr Stephen Thaler. DABUS had ostensibly invented a new food container and a rescue beacon. Patent registration applications were filed naming DABUS as inventor in Australia, Germany, New Zealand, the US, the UK and the EU. The courts in all jurisdictions rejected the applications. In Australia, the Full Court of the Federal Court (Commissioner of Patents v Thaler [2022] FCAFC 62) held that an inventor under the Patents Act 1990 (Cth) could only be a natural person, referencing D'Arcy v Myriad Genetics Inc (2015) 258 CLR 334, where the majority of the High Court stated that an invention must be brought about by human action.
The Federal Court did, however, add: "In our view, there are many propositions that arise for consideration in the context of artificial intelligence and inventions. They include whether, as a matter of policy, a person who is an inventor should be redefined to include an artificial intelligence" (para. 119).
In December 2025, the Australian Government issued its National AI Plan, setting out how AI issues should be addressed from a legal and regulatory standpoint. The position taken, for the time being, is that all questions and challenges should be addressed within the existing legal framework. However, the Government also announced the creation of an AI Safety Institute, which, amongst other things, would investigate legal reform pertaining to AI.
As AI systems infiltrate more and more aspects of our lives, the challenge that will have to be faced is whether it makes any legal sense to treat AI purely as a legal object. The fault lines and stresses are already apparent at a stage when AI is really only in its infancy. If AI is simply a tool or legal object then, by definition, it is subject to human volition and direction. However, we know that this is not the case, even now.
The areas of law where AI could significantly impact legal rights and outcomes are numerous. To mention just a few: contracts (agreements between two automated systems are triggered when some condition is met); copyright (AI writes a new poem or copies material owned by someone else); tort (an AI-powered vehicle collides with a pedestrian); and discrimination (an AI vetting program excludes a job applicant based on a protected attribute of the job-seeker).
Does the law need to address these issues with an overarching set of principles, rather than leaving courts to fit inevitable problems into legal doctrines which are ill-suited for the task? If so, could and should AI be treated as an entity having legal personality? AI has no biological features, so it clearly cannot be treated as, or as equivalent to, a natural person. Is it possible, then, for AI to attain some form of juristic personality?
Since the "persona ficta" of Roman law, juristic constructs having legal personality have been used for various purposes by many legal systems. In both common law and civil law systems, entities such as corporations and municipalities enjoy separate legal identity. But these, of course, are fundamentally distinguishable from AI in that they always have identifiable human actors behind them and usually possess assets or insurance to cover harm caused.
In the UK, a discussion paper on AI and the law issued by the Law Commission (July 2025) took note of the 'liability gaps' existing in current law with respect to the treatment of AI. It raised the possibility of granting legal personhood to at least some forms of AI system, noting that there are significant conceptual and practical pros and cons to admitting AI as a legal person into our existing frameworks.
Recognising AI as a legal person would mark a profound shift in how the law understands agency, responsibility and power. The challenge will be to secure accountability without sacrificing the law's intrinsic connection with human values. It is bound to be an area of much debate in coming years.
Errol Price, Director Legal, Symmetra Pty Ltd

Errol Price's decades of experience in commercial law, and specifically as an advisor to leading companies on equity, discrimination and workplace relations issues, add significant value to Symmetra's understanding of the complexities of the workplace. His track record in formulating human resource and workplace relations policies for many multinational and blue-chip companies, as well as advising clients on the impact of equity and anti-discrimination law, has helped position Symmetra as one of the leading consultancies on diversity and inclusion. More recently he has specialised in the law pertaining to discrimination, harassment and bullying in the Australian workplace. This has provided the legal foundation for Symmetra's highly successful diversity, EEO and anti-bullying and harassment programs, delivered across Australia for the past 10 years.

Errol conducts workshops for public and private sector organisations in Australia on dealing with unlawful and inappropriate behaviour. He advises organisations on managing bullying and designing harassment policies, and helped establish the complaints handling processes for a large NSW state department. Errol is regularly invited by leading organisations providing continuing legal education for practitioners, such as Legalwise and ICLE, to deliver presentations on selected legal topics.