Australia is enacting strong regulations to ensure the security and responsible use of AI technology, a direct response to growing concerns over the misuse of biometric data and the oversight of high-risk AI systems. Science and Industry Minister Ed Husic has made clear that the Australian government is avoiding a laissez-faire approach to AI legislation, pointing in particular to the risks posed by the abuse of individuals' biometric data.
Setting up guardrails for high-risk AI
Under the guidance of Science and Industry Minister Ed Husic, the Australian government is taking significant steps to regulate high-risk AI systems operating in critical sectors such as law enforcement, healthcare, and recruitment. This initiative is part of broader efforts to mitigate the risks associated with AI technologies, including social scoring and malicious AI-driven manipulation.
An AI Expert Group was launched earlier this month to advise the Department of Industry, Science and Resources on implementing these guardrails. Concurrently, the National AI Centre is collaborating with industry stakeholders to develop a voluntary AI safety standard. This standard aims to introduce labeling and watermarking of AI-generated content, providing transparency and accountability in the deployment of AI technologies.
The government’s proactive stance follows a “Safe and Responsible AI in Australia” consultation, which garnered over 500 submissions from industry participants and individuals alike. The overwhelming response underscored the public’s significant concern over the risks posed by AI and urged the government to establish clear and effective regulatory measures.
National assurance framework and digital transformation
As part of its comprehensive strategy to govern the application of AI, the Australian government is also focusing on digital identity and the secure, ethical use of digital technologies. A national assurance framework for AI, inspired by New South Wales’ AI assurance framework established in 2021, is being developed to standardize the approach to AI across all levels of government. This framework is designed to ensure that AI projects, especially those classified as high-priority and high-risk, undergo rigorous review and approval processes.
The Digital Transformation Agency, alongside digital and data ministers, has outlined priorities for 2024 that include the development of a National Digital Identity and Verifiable Credentials Strategy. This strategy aims to shape the future of Australia’s national digital identity and enhance digital services for citizens. Efforts will also be made to update the National Identity Proofing Guidelines and improve public education on identity security.
These measures reflect Australia’s commitment to fostering innovation in AI and digital technologies while ensuring that such advancements do not compromise the privacy and security of its citizens. By establishing a balanced regulatory framework, the government aims to protect individuals from the potential misuse of AI and digital identities, setting a precedent for responsible and ethical technology use on a global scale.