The use of artificial intelligence (AI) and other automated decision-making tools in recruitment is increasing in Australian organizations. However, research shows that these tools can be unreliable and discriminatory, and in some cases rely on discredited science.
Currently, Australia has no specific laws to regulate how these tools work or how organizations can use them.
The closest thing we have is new guidelines for public sector employers, issued by the Merit Protection Commissioner after overturning several automated promotion decisions.
A first step
The Commissioner reviews promotion decisions in the Australian public sector to ensure that they are legal, fair and reasonable. In the 2021-22 financial year, Commissioner Linda Waugh reversed 11 promotion decisions made by government agency Services Australia in a single recruitment round.
These decisions were made using a new automated process that required applicants to go through a sequence of AI assessments, including psychometric tests, questionnaires and self-recorded video responses. The commissioner found that this process, which involved no human decision-making or review, resulted in deserving candidates losing promotions.
The Commissioner has just published guidance material for Australian government departments on how to choose and use AI recruitment tools.
These are the first official guidelines issued to employers in Australia. She cautions that not all AI recruitment tools on the market here have been thoroughly tested, and that they are not guaranteed to be free of bias.
Risky and unregulated AI recruiting tools
AI tools are used to automate, or to assist recruiters with, the sourcing, selection and onboarding of candidates. By one estimate, more than 250 commercial AI recruitment tools are available in Australia, including CV screening and video assessment tools.
A recent survey by researchers from Monash University and the Diversity Council of Australia found that one in three Australian organizations have recently used AI in recruitment.
Using AI recruitment tools is a "high risk" activity. Because they affect employment decisions, these tools can impact the human rights of job seekers and risk excluding disadvantaged groups from employment opportunities.
Australia does not have specific legislation regulating the use of these tools. The Australian Department of Industry has published AI Ethics Principles, but these are not legally binding. Existing laws, such as privacy and anti-discrimination legislation, urgently need reform.
Unreliable and discriminatory?
AI recruiting tools involve new and developing technologies. They may not be reliable and there are well-known examples of discrimination against historically disadvantaged groups.
AI recruitment tools can discriminate against these groups when their members are absent from the datasets on which the AI is trained, or when discriminatory structures, practices, or attitudes are imparted to these tools in their development or deployment.
There is currently no standard test that identifies when an AI-based recruiting tool is discriminatory. Additionally, as these tools are often made outside of Australia, they are not suited to Australian law or demographics. For example, training datasets are very likely not to include First Nations peoples from Australia.
Lack of guarantees
AI recruitment tools used by and on behalf of employers in Australia lack adequate safeguards.
Risk and human rights impact assessments are not required before deployment. Monitoring and evaluation once the tools are in use may not take place. And job seekers lack meaningful opportunities to provide feedback on their use.
Although the vendors of these tools may perform internal testing and audits, the results are often not publicly available. Independent external audit is rare.
Job seekers are at a significant disadvantage when employers use these tools. The tools can be invisible and inscrutable, and they change hiring practices in ways that are not well understood.
Job seekers have no legal right to be told when AI is being used to assess them in the hiring process, nor are employers required to explain how an AI recruitment tool will assess them.
My research revealed that this is particularly problematic for job seekers with disabilities. For example, job seekers with low vision or limited manual dexterity may not know they will be assessed on the speed of their responses until it is too late.
Job seekers in Australia also lack the protection enjoyed by their counterparts in the European Union, who have the right not to be subject to a fully automated recruitment decision.
Of particular concern is the use of video assessment tools, such as those used by Services Australia. Many of these AI tools rely on facial analysis, which uses facial features and movements to infer behavioral, emotional, and character traits.
This type of analysis has been scientifically discredited. A leading provider, HireVue, was forced to stop using facial analysis in its AI tool following an official complaint in the United States.
The example of Services Australia highlights the urgent need for a regulatory response. The Australian government is currently conducting a consultation on the regulation of AI and automated decision-making.
We can hope that new regulations will address the many problems with the use of AI tools in recruitment. Until legal protections are in place, it may be best to suspend the use of these tools to screen job seekers.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Will AI decide if you get your next job? Without legal regulation, you may never even know (December 12, 2022). Retrieved December 12, 2022 from https://phys.org/news/2022-12-ai-job-legal.html
This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.