DTA creates policy for public service AI use

The Digital Transformation Agency (DTA) has released a policy for the responsible use of AI in government, which will come into effect from September.

The policy aims to give the Australian government a coordinated approach to the opportunities presented by artificial intelligence (AI), and is designed to strengthen, rather than duplicate, existing frameworks, legislation and practices that touch on the government’s use of AI.

According to the DTA, the policy is designed to evolve with technology and community expectations. The agency says the policy sets out how the Australian Public Service (APS) will take up the benefits of AI safely and responsibly; improve transparency, governance and risk assurance; and adapt relevant technology and policy over time.

“Engaging with AI in a safe, ethical and responsible way is how we will meet community expectations and build public trust,” said DTA general manager for strategy, planning and performance Lucy Poole.

Consistent with other whole-of-government digital policies, all non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply the policy. Corporate Commonwealth entities are also encouraged to apply it.

The policy’s principles include: safely engaging with AI for the benefit of Australians; explaining, justifying and taking ownership of advice and decisions when utilising AI; having clear accountabilities for the adoption of AI; and building AI capability for the long term.

To help implement the policy, the DTA has published a standard for accountable officials (AOs), who are expected to lead their agencies in uplifting governance of AI adoption, embedding a culture that fairly balances risk management and innovation, developing responses and adaptations to AI policy changes, and participating in cross-government coordination and collaboration.

“We’re encouraging AOs to be the primary point of partnership and cooperation inside their agency and between others,” Poole said. “They connect the appropriate internal areas to responsibilities under the policy, collect information and drive agency participation in cross-government activities.”

The DTA is also set to release a standard for AI transparency statements, setting out the information agencies should make publicly available, such as: their intentions for using AI; areas where the public may interact directly with AI without a human intermediary; how AI systems will be monitored for effectiveness; compliance with legislation and regulation; and how the public will be protected against negative impacts.

“Statements must use clear, plain language and avoid technical jargon,” Poole added.