Automated resume-scanning systems have been found to discriminate against African-American names, graduates of women's colleges, and even the word "women" in a job application.
Credit-scoring AI that can cut people off from public benefits such as health care, unemployment and child support has been found to penalise low-income individuals.
Misplaced trust in algorithms lay at the heart of Australia's Robodebt debacle, in which the assumption of a regular week-to-week wage packet was baked into the system.
Human systems have checks and balances, and higher authorities that can be appealed to when there is an apparent error. Algorithmic decisions often do not.
In our research, forthcoming in the journal Organization, my colleagues and I found that this lack of a right of appeal, or even a pathway to appeal, reinforces forms of power and control in workplaces.
Now what?
So AI, an influential tool of the world’s largest corporations, appears to systematically disadvantage minorities and economically marginalised people. What can be done?
The protest initiated and led by Google’s own employees may yet bring about change inside the company. Internal discontent at the online giant did get results two years ago, when protest over the kid-glove treatment of executives facing complaints of sexual misconduct led to a change in the company’s policy.
Outsiders are also beginning to take more of an interest. The European Union’s General Data Protection Regulation (GDPR), which has boosted privacy standards since 2018, taught regulators around the world that the black box of algorithmic decision-making can indeed be prised open.
The G7 group of leading economies recently set up a Global Partnership on Artificial Intelligence to drive discussion around regulatory solutions to these problems, but it is still in its infancy.

As an industrial relations issue, the use of AI in hiring and management needs to be brought into the scope of collective bargaining agreements. Current workplace grievance procedures may allow human decisions to be appealed to a higher authority, but will be inadequate when the decisions are not made by humans – and people in authority may not even know how the AI arrived at its conclusions.
Until internal protests or outside intervention start to change the way AI is designed, we will continue to rely on self-regulation. Given the events of the past week, this may not inspire a great deal of confidence.
Michael Walker, Adjunct Fellow, Macquarie University
This article is republished from The Conversation under a Creative Commons license. Read the original article.