“And, frankly, that may not have been the intent with so-called ‘robo-debt’ but that was the effect. We’re using a new form of automated debt recovery on some of the most vulnerable people in Australia, people who receive Centrelink benefits.”
He said the use of such technologies made decisions more opaque, making it harder for vulnerable people to understand and challenge any findings that might be unlawful or unfair.
“If the decision was opaque, you’re left with that unsettling feeling there’s nothing really you can do with it, you can’t find out whether in fact the decision was lawful or unlawful,” he said.
Polling commissioned by the Human Rights Commission found 54 per cent of respondents were aware government agencies were using computers to make automated decisions. The least aware were low-income earners, on 45 per cent.
The Essential poll conducted in late July found two-thirds of participants – 68 per cent – thought it was very important they could appeal a decision. And 59 per cent said they should be informed if a computer program had made the decision.
Between 41 and 48 per cent of respondents, depending on the measure, said they would have a lot more trust in the decisions if a range of oversight safeguards were put in place, including human checks, limits on data sharing with government and stronger legal protections of their rights.
Mr Santow, who is leading the commission’s human rights and technology project, said people in the oversight process had to be properly equipped to understand and dispute automated decisions, not just “tick and flick”.
He said governments and companies were increasingly looking to artificial intelligence to solve problems even though it did not yet have the necessary capacity. Mr Santow labelled the push “really dangerous”.
“The risk is you have an AI decision-making system that gives a veneer of technological accuracy when, in fact, it may be no more accurate. Indeed, it may be less accurate than the conventional way of making the decision,” he said.
The commission has called for existing laws, including those around discrimination, to be applied consistently to the use of emerging AI technology and says there are also gaps that need to be addressed with new legislation if people are to be protected from the growing risks.
The government has been forced to repay $721 million to 373,000 Australians because of the “robo-debt” scandal, in which people were hit with debt notices for alleged overpayments based on flawed calculations. Under the scheme, Centrelink staff were removed from the data-matching process across government agencies.
Fergus Hunter is an education and communications reporter for The Sydney Morning Herald and The Age.