
In 2025, AI no longer exists only in dedicated applications like ChatGPT. It is now integrated into everyday applications such as Microsoft Teams, Outlook, Word and Excel. Many people are already using AI as part of their work without realising it: mundane daily tasks like setting reminders, taking notes and preparing summaries are now completed with the assistance of AI.
The Australian Government has implemented a comprehensive policy for the responsible use of AI, the Policy for the responsible use of AI in government (AI Policy), which took effect on 1 September 2024. The AI Policy is a first step in positioning government as an exemplar in the safe and responsible use of AI, in line with the Australian community’s expectations. It sits alongside whole-of-economy measures such as the proposed mandatory guardrails and voluntary industry safety measures.
The AI Policy is a step up from Australia’s eight AI Ethics Principles, published in November 2019. It aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations. Key aspects of the AI Policy include:
- Accountability: Agencies must appoint accountable officials to oversee AI use and ensure compliance with the policy.
- Transparency: Agencies are required to publish transparency statements detailing how AI is used in their operations.
- Evaluation: Agencies should review internal policies and approaches on an ongoing basis, and monitor AI use cases to assess unintended impacts and ensure ongoing compliance.
- Ethical Use: AI must be used ethically, with a focus on preventing harm and ensuring fairness.
- Public Trust: Building and maintaining public trust is a core objective, recognising that trust is essential for the successful adoption of AI.
In October 2024, the Office of the Australian Information Commissioner (OAIC) also published its Guidance on privacy and developing and training generative AI models, which includes checklists for privacy considerations when:
- planning and designing an AI model or system;
- collecting and processing the training dataset; and
- developing or training an AI model.
The OAIC’s guidance prepares organisations for the new AI-related requirements in the Privacy Act 1988 (Cth), which commence on 10 December 2026. Under the new requirements, APP entities must include specific details in their privacy policies if personal information is used for automated decision-making purposes (new APP 1.7). This means that if an entity uses a computer program to make, or to do something substantially and directly related to making, a decision about an individual, and that decision could significantly affect the individual’s rights or interests, the entity must disclose this in its privacy policy.
New APP 1.8 and 1.9 provide further clarification around automated decision-making and transparency. APP 1.8 addresses the disclosure obligations that apply when AI is used for automated decision-making, while APP 1.9 gives examples of the types of decisions that could significantly affect an individual’s rights or interests:
- a decision made under a provision of an Act or a legislative instrument to grant, or to refuse to grant, a benefit to the individual (relevant to not-for-profits administering government programs);
- a decision that affects the individual’s rights under a contract, agreement or arrangement;
- a decision affecting access to a significant service or support.
In light of the AI Policy and the Privacy Act amendments, we recommend that Australian Government agencies:
- understand which applications have AI models integrated or embedded;
- identify what types of data are being used to train the AI models, and whether any personal information is included;
- consider replacing personal information with de-identified information where possible;
- understand the difference between approved AI services (e.g. the Microsoft Enterprise suite) and unapproved AI services, and implement technical controls to block unapproved AI services (e.g. DeepSeek, AI browser extensions);
- consider the ethical and responsible use of AI, evaluate whether any privacy risks arise from using the AI models, and perform a Privacy Impact Assessment if needed;
- evaluate the need to update the agency’s Privacy Policy in preparation for the Privacy Act updates;
- review and monitor the use of data on an ongoing basis to ensure compliance with government requirements and the Privacy Act.
The privacy team at Griffin Legal can assist you in understanding the new AI compliance requirements and managing the additional privacy risks that arise from using AI services. Please feel free to contact our privacy team for tailored advice.