Building Ethical Machines in Social Services: Examining, Evaluating, Building Fairness and Explainability in ADM 

December 2021
ARC Centres of Excellence

A significant area of automated decision-making (ADM) in social services relates to the use of predictive measures – such as predictions of children's risk of abuse or neglect in child protection, predictions of recidivism or crime in policing and criminal justice, predictions of welfare/tax fraud in compliance systems, and predictions of long-term unemployment in employment services. While earlier and current versions of these systems are based on standard statistical analyses, machine learning versions are increasingly being developed and deployed.

Despite these changes in machine/algorithm design, the issues of bias, fairness and explainability remain largely unchanged and have not been adequately addressed to date. Working with computer scientists, lawyers, social scientists, and users of social services, this project will engage with substantive empirical examples of ADM in disability services, child protection, criminal justice and social security to develop an understanding of what social service users and professionals regard as fairness and explanation.

Project members

Professor Paul Henman

Professor
School of Social Science
Chief Investigator and Node Leader
ADM+S Centre