Extract from ABC News
The federal government has introduced its plan to respond to the rapid rise in use of artificial intelligence (AI) technologies, which will impose hard rules on the highest risk technologies, while minimising interventions in low risk AI to allow its growth to continue.
Key points:
- The government will introduce a risk-based system to protect against the worst potential harms of AI
- High risk technologies will have mandatory rules applied to them, including possible independent assessments and audits
- The government will avoid impeding the growth of low risk AI, largely focusing on voluntary standards
AI has the potential to add hundreds of billions of dollars to the Australian economy and improve pay packets and worker wellbeing, but public trust in AI technologies is low, and the government's consultations revealed widespread concern about risks to jobs, discrimination, and other social harms.
An International Monetary Fund study released this week found AI was poised to impact about 60 per cent of all jobs in advanced economies — with about half of those likely to benefit from AI boosting productivity, while the other half would be negatively impacted.
Industry Minister Ed Husic on Wednesday laid out the government's initial response, committing to a "risk-based" approach that would be able to respond to AI technologies even as the landscape continues to shift.
Mandatory rules for risky tech
Under the government's proposal, mandatory "safeguards" would be applied to high risk AI, such as self-driving vehicle software, tools that predict the likelihood of someone reoffending, or that sift through job applications for an ideal candidate.
High risk AI could require independent testing before and after release, ongoing audits and mandatory labelling where AI has been used.
Dedicated roles within organisations using high risk AI could also be mandated, so that a specific person is responsible for ensuring AI is used safely.
The government will also begin work with industry on a possible voluntary AI content label, including introducing "watermarks" to help AI content be identified by other software, such as anti-cheating tools used by universities.
The risk-based approach will also allow the government to stay out of the way of innovation in the sector, so that Australia can make the most of new technologies.
AI is already covered under privacy, copyright, competition and other laws, but the government said it was clear existing laws did not adequately prevent harms from AI before they occur.
Mr Husic said the government was listening to the concerns of Australians.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI," Mr Husic said.
“These immediate steps will start building the trust and transparency in AI that Australians expect."
An expert advisory committee will be established to guide the development of mandatory rules for high risk AI, as the government consults on details to prepare legislation.
The government remains open on whether to amend existing laws or introduce an EU-style "AI Act".
The government's response noted other jurisdictions were moving to ban some of the highest risk technologies, such as real-time facial recognition technologies used in law enforcement, but did not comment on whether Australia would ultimately follow that path.
It also identified that "frontier" AI models such as ChatGPT, which are far more powerful than previous generations of AI, may require targeted attention, since they are developing at a speed and scale that could outpace existing legislative frameworks.