Children at risk and the ethics of predictive risk assessment

The Social Development Ministry is stopping short of implementing a predictive risk assessment tool that can identify children at risk of abuse. MSD commissioned Auckland economist Professor Rhema Vaithianathan to develop the model, which uses data about children and their families to identify those at risk of physical, sexual or emotional abuse before the age of two. Professor Vaithianathan says MSD has decided against implementing the tool in the way it was intended, a decision she says is unethical. Dorothy Adams is Acting Deputy CEO of Organisational Solutions for the Ministry of Social Development.

See also Emily Keddell’s article on The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool?

3 thoughts on “Children at risk and the ethics of predictive risk assessment”

    1. Thanks for your comment. First, he mihi aroha atu ki te whanau o Moko (a message of loving sympathy to Moko’s family) – before I talk about him I just want to acknowledge his life and his whanau. Our thoughts are with you.

      Like everyone, when I read about his terrible, terrible treatment I’m looking for the thing that could have saved him – who knew, who knew what, and when, are crucial questions in my mind. I think we all hope and wish there was a way someone could have predicted what was going to happen to him, and stopped it. It’s a very normal human response to tragic events – it helps us feel that it is preventable and controllable, and that we could stop it happening to someone else. But several factors make prediction – whether by human or machine – in cases such as this very difficult. For human prediction, few risk factors and a lack of some crucial information (who lived at the house) meant a fuller assessment was not warranted at the time. For machine prediction (i.e. a PRM), child deaths are such rare events that they are difficult to predict statistically at all, and therefore difficult to build predictive models around.

      The crucial question you raise is: could predictive risk modelling have done any better than human judgement at assessing him as at least high risk of abuse? Sadly, the answer is probably no. I say probably because without knowing every background factor it’s difficult to say for sure, but in this case there are several reasons he is unlikely to have been defined as ‘high risk’ by a PRM in such a way that intervention was possible. Firstly, whatever risk score his family would have been given, he was in an informal care arrangement at the time of his death, so it would be the risk score of his ‘caregivers’ that would be pertinent. No-one knew of this arrangement until late in the piece. Secondly, it seems no-one even knew one of the adults was residing at the address, so whatever his ‘risk score’ would have been via a PRM, it would not have been connected with Moko. Finally, the contact between the other adult in the home and CYF regarding the notification about Moko’s mother suggests she was considered a ‘safe’ carer, and therefore it’s unlikely her risk score via a PRM would have been high either. In fact, it may have been misleadingly low. Of course I wish either machine or human could have predicted it. But in this case, it was not likely.
