How your organisation can use AI systems securely

13/02/2024

The Australian Cyber Security Centre recently released a publication, in collaboration with international partners, to provide guidance for organisations on how to use Artificial Intelligence (AI) systems securely, including identifying key risks and how to mitigate them.

Here is a summary of the key considerations for the development, procurement, implementation, and ongoing use of AI systems.

While AI has the potential to create opportunities, it can also introduce new organisational risks and can cause significant harm, particularly in high-risk settings like healthcare, recruitment, and critical infrastructure. Leaders need to understand AI-related risks so they can be mitigated through proper governance policies and processes.

Background

Australia has bold ambitions to be an early adopter and leader in the regulation of Artificial Intelligence (AI). Organisations across all sectors, particularly in knowledge-intensive industries, are looking at ways to capitalise on AI to drive efficiency and improve operations.

AI-related risks

The US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework outlines AI-specific threats, including types of cyber attacks which target AI systems. These threats include:

  • Data poisoning – when training data is manipulated to negatively impact the performance of machine learning tools and systems, for example, providing inaccurate, biased, or malicious outputs.
  • Prompt injection attacks – attempts insert malicious instructions or hidden commands into an AI system to hijack the AI model’s output and jailbreak the AI system. Jailbreaking is a form of hacking which bypasses an AI’s guardrails to elicit prohibited information, for example personal or proprietary information.
  • Hallucinations – when incomplete or biased training data causes an AI model to learn incorrect patterns and generate information that is not accurate or factually correct.
  • Privacy and intellectual property concerns – challenges ensuring the security of sensitive data an organisation holds, including proprietary information and customers’ personal data.
  • Model stealing attacks – when a malicious actor provides inputs to an AI system and uses outputs to create a copy of the AI model. As AI models can require significant investment, this poses serious intellectual property concerns.
  • Data drift – when the data an AI system encounters in the ‘real world’ differs from the data a system is trained on, leading to degradation in the AI’s performance.

Managing risks

The Australian Securities and Investments Commission (ASIC) and Australian Prudential Regulation Authority (APRA) are intensifying their focus on technology and data-related risks, including holding directors and officers accountable for exposing companies to legal breaches arising from inadequate risk management. This underscores the need for companies to seek legal advice on implementing proactive measures to mitigate AI risks before they materialise.

Organisations considering using AI should develop an AI policy to underpin the development, procurement, implementation, and ongoing use of AI. This policy should operate in tandem with existing technology and data governance policies.

Cyber security

As an emerging technology, there are limited existing regulations to ensure that AI systems are secure. It is important that organisations consider the cyber security implications of AI systems. Leaders should adopt a ‘secure-by-design’ approach to consider threats from the outset and build mitigations into the development of AI systems. Consider, for example, following the National Cyber Security Center-UK’s Guidelines for secure AI system development.

  • Has your organization implemented relevant cyber security frameworks?
  • Has a security risk assessment been done?
  • Can your organisation implement a trial of the AI system to test firewalls, gateways, logging and monitoring systems?
  • Does your organisation have a data breach response plan or business continuity plan?

Vendor management

Supply chains of AI systems can be incredibly complex and carry inherent risks. It is fundamental for leaders to conduct due diligence and ensure AI vendors have appropriate risk management processes.

  • Is your vendor transparent about how the AI system is developed and tested?
  • Have you conducted a supply chain evaluation to assess third-party risks?
  • Have you familiarised yourself with any up-time or availability commitments made by the vendor?
  • Are incident management responsibilities clearly defined in your contract?
  • How will data be managed once the commercial agreement ends?

Privacy

Consider how AI systems collect, process and store data and how this may impact your privacy and data protection obligations.

  • Are there privacy-enhancing technologies that can be used to protect data?
  • Has a privacy impact assessment been done to ensure compliance with the Privacy Act?
  • Are there vulnerable people using the AI system, including children and groups that have historically experienced discrimination or bias?
  • Will your organisation use a private version of the AI system?
  • Will personal and proprietary data be used to train the model?

Data governance

The more data that is accessed and generated, the more risks that accumulate for an organisation. Leaders should ensure central coordination of data activities and weave data governance into the fabric of every business process.

  • Is the AI system hosted in the cloud?
  • Is data sent overseas?
  • Are access rights to the AI model and training data granted on a need-to-know basis?
  • Are accounts routinely revalidated and disabled after a set period of inactivity?
  • Does your organisation enforce phishing-resistant multi-factor authentication?
  • How will your organisation manage backups?

Transparency and accountability

Transparency, or explainability, refers to the ability to understand why and how an AI model makes decisions. In the regulatory space there are calls for ‘algorithmic’ transparency, however, understanding the complex inner workings of constantly evolving AI systems is not always possible or feasible. Most AI systems are ‘black box AI’, which means they have opaque internal decision-making processes.

A better approach to AI risk management is to assess the inputs and outputs and identify potential harms. For example, is the AI system prone to errors, biases, or hallucinations? What are the risks of certain inputs like sensitive or commercial information?

  • Do you understand the general limits and constraints of the AI system?
  • Is your organisation adequately resourced to securely set-up, maintain, and use the AI system?
  • Who are the staff that will interact with the system and how will they be trained?
  • What data can and cannot be used in the AI system?

Ongoing monitoring and assurance

As AI is an evolving technology and risks can materialise across the AI lifecycle, a central tenet of AI governance is ongoing monitoring and assurance.

  • Will you or your vendor conduct periodic health checks of the AI systems to detect ‘data drift’, errors and biases?
  • Will you log and monitor outputs to detect changes in behaviour and performance?
  • How will you identify if something goes wrong with the AI system like an adversarial attack?
  • How will you ensure compliance obligations are met?

We work with government, business and not for profits to provide practical advice and innovative solutions to emerging issues including AI. From policy and procedure development to audits and risk assessments, we can help ensure your organization manages the opportunities and risks that AI presents. For further information please contact us at enquiries@griffinlegal.com.au.

New proposed digital identification laws

In recent weeks the Australian Government introduced legislation (the Digital ID Bill) to establish a regulatory regime for the accreditation of digital identification providers. Overall, the Digital ID Bill aims to give Australians secure and effective ways in which to verify their identify for use in online transactions with government and business.