Using AI recruitment tools


General protections from unlawful discrimination under the Fair Work Act 2009 (Cth) apply to both employees and prospective employees. It is important to ensure that reliance on tools such as AI in recruitment processes does not infringe the protections from discrimination that are afforded to prospective employees. Protected attributes include race, colour, sex, sexual orientation, age, religion and national extraction, among others.

AI tools are known to contain some biases, which should be proactively managed to avoid falling foul of the unlawful discrimination provisions of the Fair Work Act.

AI tools are now widely used to assist employers and recruitment companies to:

  • find candidates;
  • screen candidates;
  • analyse resumes;
  • conduct pre-employment assessments;
  • consider cultural fit;
  • answer candidates’ questions via chatbots; and
  • interview candidates.

These tools can be very effective at locating potential candidates who might otherwise not apply for the job, and can sift through large volumes of information to quickly identify potentially suitable candidates. While this may streamline the recruitment process, there are risks to be managed when AI is used in these processes.

For example, in 2014, tech giant Amazon introduced AI into its recruitment processes, using an algorithm that had been trained on CVs submitted over the previous ten years. Because those CVs came mostly from men, the algorithm developed a strong bias towards rating men as more suitable applicants, thereby discriminating against female candidates.

In Australia, this inadvertent discrimination should be avoided, as a prospective employee who has been discriminated against in this way could seek remedies under the Fair Work Act, including an injunction, reinstatement and a compensation order. Employers may also receive a financial civil penalty for a breach. The reputational damage an employer may suffer as a result of adverse publicity should also be an incentive to pay close attention to this issue.

Employers can do a number of things to manage the discrimination risks of AI, including:

  • checking that the AI tool has been properly tested, that it is effective, and that the risks of its use are known;
  • remembering that AI tools are not unbiased;
  • ensuring an individual person is responsible for the final decisions made by the AI tools the employer uses;
  • ensuring privacy obligations are addressed before using an AI tool;
  • educating staff on the use and risks of AI; and
  • not screening out and excluding candidates purely on the basis of an AI assessment.

Having an AI policy in place to regulate the use of AI in your organisation can assist in managing these risks. Contact Griffin Legal for assistance in preparing your AI use policy.

We also remind employers that, if the Privacy Act 1988 (Cth) applies to your organisation, an applicant’s personal information must be handled in accordance with the Australian Privacy Principles to avoid breaching that Act. This includes the use of applicants’ personal information by AI tools.
