Thinking of using AI in your organisation? Read this first – the legal issues associated with the use of AI programs

26/05/2023

Like the advent of the personal computer and the internet before it, ChatGPT is on the path to revolutionising the way we work and the speed at which work can be done. This change will disrupt a whole raft of current business practices but, in the long term, should deliver an increase in economic productivity.

ChatGPT is a language-based artificial intelligence (AI) program. AI developers aim to create programs that perform tasks generally performed by humans. The objective is for these programs to reduce errors, increase accuracy and efficiency, and complete work more economically than humans can.

In the legal space, there is little doubt that both lawyers and clients will benefit from AI, including ChatGPT and other language-based AI programs, as will many other industries. However, the rapid development and potential of AI bring significant legal and ethical issues. This article discusses some of the legal and ethical issues to be worked through as the influence of ChatGPT and similar AI programs grows.

Privacy and data protection
In Australia, the Privacy Act 1988, which includes the Australian Privacy Principles (APPs), governs the way Commonwealth Government agencies, organisations with an annual turnover of more than $3 million and some other organisations handle personal information. Aside from privacy law obligations, many commercial contracts also impose enforceable data protection obligations.

AI poses privacy and broader data protection issues including:
• its potential to deploy surveillance techniques on its users, such as an AI-driven app mining the data held on a user’s phone;
• a lack of informed consent, for example, not fully informing users of the uses and potential harms associated with an AI product; and
• the availability, use and disclosure of individuals’ personal information once it has been obtained by AI.

As the reaction to recent high-profile data breaches has demonstrated, AI developers and users should be aware of growing community expectations around the protection of personal information and data. To avoid breaching privacy law and suffering reputational damage, any organisation implementing AI products should ensure that doing so will not put it in breach of its privacy law obligations. There are a number of steps that can be taken to safeguard against these risks, including undertaking a privacy impact assessment before deciding to purchase or use AI programs in your organisation (and Griffin Legal, with its leading expertise in Australian privacy law, can assist).

Intellectual property
Intellectual property (IP) rights are another significant legal issue that users of AI programs should be aware of, both to avoid the loss of important organisational assets and to manage the risk of breaching another person’s IP rights and the expensive legal actions that can follow. In Australia, IP rights are protected by Commonwealth legislation, including the Copyright Act 1968, which provides legally enforceable rights to creators of works such as literary, dramatic, musical and artistic works.

One of the key IP issues AI raises is who owns the IP in a work: the AI program itself, the creator or owner of the AI, or the person who asked the AI to create something?

At the moment at least, an AI program can’t own copyright in a work. This is because the Copyright Act protects works created by a ‘qualified person’, being a citizen or resident of Australia (section 32).

Another piece of IP legislation in Australia is the Patents Act 1990, which regulates the protection of any device, substance, method or process that is new, inventive and useful. The courts have tested whether an AI program can own a patent or be named as the inventor under the Patents Act, and concluded that it cannot. The Court said that “Only a natural person can be an inventor for the purposes of the Patents Act and Regulations.”1

Another emerging IP issue is AI using and adapting content that has already been created and in which IP rights already exist. For example, in the US, Universal Music Group has told popular music streaming services Spotify and Apple to block AI from scraping copyrighted songs, which can then be used to create AI-generated songs.2

Clearly, the intersection of IP and AI is complex, and IP is certain to remain an ongoing legal issue as AI programs grow in capability and complexity. Users will need to take care not to breach the IP rights of creators when using AI to scrape existing content and generate new content.

In the meantime, organisations should ensure their contractual documents are clear on the ownership of works, so that there is a clear pathway through any ownership dispute, whether minor or complex, should one arise.

Liability

This segues neatly into the next legal issue: what happens when something goes wrong? A significant emerging question is who, or what, is responsible when an AI program fails.

‘Liability’ relates to the legal responsibility for a person’s actions or failure to act.

One example of what can go wrong involves self-driving (or ‘automated’) vehicles. These have the potential to make our roads safer (by removing miscalculations, errors of judgment, speeding and phone use) and our air cleaner (through better traffic coordination as vehicles ‘talk’ to each other and to traffic lights). Australia has developed a regulatory framework for automated vehicles,3 which itself highlights that Australian law does not currently support their deployment. This means that, for now, someone who ‘operates’ an autonomous vehicle will be liable for any accident caused.

But as laws slowly but inevitably change to permit autonomous vehicle use, legal issues that have never previously been considered will arise. For example, the traditional approach to insurance is to apportion liability based on which driver is at fault. The argument with autonomous vehicles is that the ‘driver’ can’t really be at fault. This means that insurers, civil courts and potentially criminal courts will have to consider the liability of the manufacturer, including where incorrect data has been transmitted or there have been errors in the programming of the autonomous vehicle.

To determine who is liable, a number of questions must be asked, including: whether there is any applicable regulation, insurance policy or contract in place; whether there was an error in the data that was inputted; and whether flawed programming or instructions were given to the AI program.

Returning to privacy law implications, while instant and continual data sharing will be the key to successful autonomous vehicle use, that data sharing will also inherently raise issues around the privacy of occupants and the security of the data as cyber security threats increase. This takes us back to the privacy and data protection issues above and demonstrates the significant overlap among AI legal and ethical issues.

Unfairness, bias and discrimination

This is a huge ethical issue for AI. Just recently, a number of AI experts, tech personalities and ethicists called for a pause on giant AI “experiments”.4 They cited risks including AI-facilitated propaganda and untruths, the automation of fulfilling jobs, and the eventual rendering of humans obsolete. This demonstrates that there are significant concerns even among AI insiders, who are seeking AI governance, protocols to ensure safe systems, and the establishment of AI regulatory authorities.

But why are people calling for changes to the way AI operates? The answer lies in part with the biases that are built into some AI programs currently in use. Examples include AI discrimination in recruitment, tenancy applications and criminal profiling. Unlawful discrimination is likely to put AI users in breach of Commonwealth legislation, such as the Age Discrimination Act 2004 (Cth), Disability Discrimination Act 1992 (Cth), Racial Discrimination Act 1975 (Cth) and Sex Discrimination Act 1984 (Cth). Again, the question of liability will arise.

Discrimination and bias are not only legal issues; they are also ethical issues. The internet has plenty of real-life examples of bias in AI algorithms, such as the bias against women in Amazon’s experimental automated recruitment system. Because men had dominated certain technical roles, the AI program taught itself that male candidates were preferable, regardless of the merits of candidates. It even reportedly downgraded applications that referred to the applicant being a member of a women’s club or women’s college.5 This is clearly not acceptable and demonstrates that AI developers and users alike must ensure their AI systems comply with the law and community expectations.
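How this happens is worth pausing on. Below is a minimal, illustrative sketch in Python – using entirely synthetic data and hypothetical feature names, not Amazon’s actual system – of how a simple model trained on historically biased hiring decisions will faithfully learn to penalise a signal that has nothing to do with merit:

    # Illustrative only: synthetic data and hypothetical feature names.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Two features per candidate:
    #   skill       - a genuine merit signal (what we want a model to use)
    #   womens_club - 1 if the CV mentions a women's club or college
    #                 (irrelevant to merit)
    skill = rng.normal(0.0, 1.0, n)
    womens_club = rng.integers(0, 2, n).astype(float)

    # Historical hiring decisions: past recruiters rewarded skill but also
    # systematically marked down the 'womens_club' signal. This is the bias
    # baked into the training data.
    past_logits = 1.5 * skill - 1.0 * womens_club
    hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-past_logits))).astype(float)

    # Train a plain logistic regression on the biased history.
    X = np.column_stack([skill, womens_club, np.ones(n)])
    w = np.zeros(3)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - hired) / n  # gradient step on log-loss

    print(f"learned weight on skill:       {w[0]:+.2f}")
    print(f"learned weight on womens_club: {w[1]:+.2f}  (negative: bias learned)")

The model has no notion of fairness; it simply reproduces whatever patterns, fair or unfair, exist in its training data. This is why auditing training data and testing model outputs for disparate impact are core steps in complying with the discrimination legislation above.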

AI getting it wrong – fake news

It seems that every week there are more examples of AI ‘getting it wrong’ or making things up. For example, it has been reported that ChatGPT labelled a whistleblower an offender. Brian Hood revealed that bribes were being paid by Note Printing Australia – a wholly-owned subsidiary of the Reserve Bank of Australia – to win contracts. However, ChatGPT has been incorrectly describing Mr Hood as having been charged in connection with the scandal. Mr Hood is currently pursuing legal avenues against OpenAI, the creator of ChatGPT.6

In another instance, it has been reported in the United States that when asked to name legal scholars who have sexually harassed someone, ChatGPT returned a list including the name of law professor Jonathan Turley, citing a Washington Post article as the source of the information. The problem is that not only does Professor Turley deny the allegation and state that he has never been accused of sexual harassment, but the article itself does not exist. That’s right: ChatGPT fabricated a news article, giving a whole new meaning to the concept of ‘fake news’.7

Organisations should implement AI use policies and procedures as a matter of urgency to manage the risks that come with the use of AI.

Embracing AI

The Commonwealth Department of Industry, Science and Resources has developed a voluntary AI ethics framework, which includes eight AI ethics principles. Government and private sector organisations can apply the principles to build public trust and positively influence outcomes.8

But will this be enough? Given the legal and ethical risks that AI raises, likely not.

International jurisdictions are already looking at specific AI regulation. The Cyberspace Administration of China, for example, has developed draft measures that would make AI developers responsible for ensuring the validity of the data used to train their AI tools and for updating the technology when it generates incorrect content.9 And in California, a bill has been introduced proposing an office to oversee the use of AI tools by government agencies.10

Moving forward, it will be important for organisations to cautiously embrace AI or risk being left behind by competitors that are using AI tools to increase productivity, accuracy and efficiency. At the same time, it will be vital to ensure that any use of AI programs complies with the law and community expectations. Organisations that protect and uphold legal and ethical principles will gain a defensible and beneficial tool to integrate into their business systems.

Boards and senior executives should ensure appropriate governance mechanisms are in place to set the culture around AI use and to manage the many legal and ethical risks AI poses. Staff should be provided with appropriate training, and policies and procedures should be in place to guide staff as AI is adopted. Contracts, including insurance contracts, should be properly reviewed, and privacy and data protection frameworks should be updated.

Further information

For further information on liability and ethical issues associated with AI programs, read our article.

Griffin Legal works with government, business and not-for-profits to provide solutions to emerging issues, including the use of AI programs. We are experts in privacy law, commercial law and corporate governance. We provide Total Quality Service and deliver practical advice and innovative solutions.

References

1 Commissioner of Patents v Thaler [2022] FCAFC 62.

2 ‘Streaming services urged to clamp down on AI-generated music’, Financial Times, 12 April 2023 <https://www.ft.com/content/aec1679b-5a34-4dad-9fc9-f4d8cdd124b9>.

3 National Transport Commission, ‘The regulatory framework for automated vehicles in Australia’, policy paper, February 2022 <https://www.ntc.gov.au/sites/default/files/assets/files/NTC%20Policy%20Paper%20-%20regulatory%20framework%20for%20automated%20vehicles%20in%20Australia.pdf>.

4 ‘Pause Giant AI Experiments: An Open Letter’, Future of Life Institute <https://futureoflife.org/open-letter/pause-giant-ai-experiments/>.

5 ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters, 11 October 2018.

6 ‘Hepburn mayor may sue OpenAI for defamation over false ChatGPT claims’, ABC News, 6 April 2023 <https://amp.abc.net.au/article/102195610>.

7 ‘ChatGPT invented a sexual harassment scandal and named a real law prof as the accused’, The Washington Post, 5 April 2023 <https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/>.

8 Department of Industry, Science and Resources, ‘Australia’s Artificial Intelligence Ethics Framework’, November 2019 <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework>.

9 ‘US and China take first steps toward regulating generative AI’, Computerworld, 12 April 2023 <https://www.computerworld.com/article/3693017/us-and-china-take-first-steps-toward-regulating-generative-ai.html>.

10 ‘As attention on AI increases, California ramps up oversight’, Bloomberg Law, 23 February 2023 <https://news.bloomberglaw.com/in-house-counsel/as-attention-on-ai-increases-california-ramps-up-oversight>.
