The Australian Human Rights Commission (AHRC) recommended in the Human Rights and Technology Report ('the report') that an independent statutory authority, an AI Safety Commissioner, be established in Australia to support government agencies and the private sector in ensuring that artificial intelligence-informed decision making aligns with human rights standards [1]. The AHRC identifies the 'participation' of stakeholders affected by new technologies in decision making as a core principle of a human rights approach, but the report offers little guidance on how this will be achieved through the functions of an AI Safety Commissioner. In this presentation we discuss the processes used by the Australian Law Reform Commission (ALRC) to enable interested stakeholders to 'participate' in developing proposals for law reform. Law reform is 'the technique of conceptualising [a legal] problem, bringing in social data and seeing the legal problem in its social context' [2, p. 17]. The process of law reform in Australia is characterised by a commitment to consultation with legal and non-legal parties, enabling diverse views to inform the development of the law. This requires law reform bodies to present existing laws and their effects, and to describe alternatives and envision their potential effects, in a manner and language that enables non-legal parties to participate. As the social impact of algorithms increases, the ALRC's processes offer guidance on the trade-offs an AI Safety Commissioner will need to consider when designing review processes that enable the participation of people affected by algorithms.
[1] Sophie Farthing, John Howell, Katerina Lecchi, Zoe Paleologos, Phoebe Saintilan, and Edward Santow. 2021. Human Rights and Technology Final Report. Technical Report. Australian Human Rights Commission, Sydney.
[2] The Hon Michael Kirby AC CMG. 2015. Forty Years on — Lessons of the ALRC. In Remarks at the ALRC 40th Anniversary Celebration. Federal Court of Australia, Sydney.