Extract from ABC News
Elon Musk has dismissed the resignation of dozens of DOGE staffers who cited security concerns, saying they were "Dem political holdovers". (AP: Alex Brandon)
In short:
About 21 civil servants from Elon Musk's Department of Government Efficiency have resigned, saying they will not use their skills to jeopardise the sensitive data of Americans.
It comes as US media has reported DOGE was using AI to determine which government workers were critical.
Mr Musk has described the mass resignation from his department as "fake news", saying staff who quit "would have been fired had they not resigned".
Dr Hart was concerned that the civil service was being politicised, saying an ideologue's priorities might not necessarily match the public good.
"A lot of government depends not on politicians making decisions [but] a trained qualified expert public service carrying out those decisions, implementing those decisions,"he said.
"That is a problem when you introduce what seems to be, in the case of Elon Musk, a bunch of young, fairly inexperienced individuals that have been recruited from it seems his own organisations with no training in government."
The civil service played an important role in any democratic government, and members of the bureaucracy needed to have a certain degree of knowledge and expertise to roll out policy, Dr Hart added.
"That doesn't seem to exist at the moment," he said.
AI reportedly used to find which government jobs are critical
Dr Jonathan Kummerfeld from the University of Sydney specialises in Natural Language Processing with a focus on systems for collaboration between people and AI models.
He weighed in on the MSNBC report that civil servants' responses to the "what did you do last week" email would be fed into an AI system to determine whether someone's work was mission critical.
"The biggest risk is basically it's going to make mistakes, and those mistakes partly come from biases in the model that are hard to pin down and hard to understand,"Dr Kummerfield said.
"So when they give whatever information they're collecting to the model, they also give it a prompt saying, please make a decision, and they give it some information.
"The way you write that is going to really influence the decisions it makes."
Research published in the peer-reviewed journal Nature shows that biases within artificial intelligence have the potential to amplify our own biases.
"So for example, if one person writes who is a native English speaker, and someone else who writes who English is their second language," he said.
"Maybe they phrase things in a slightly unusual or different way, or they use words that are going to be interpreted differently, that could impact its decision unfairly."
There are real potential benefits with AI, Dr Kummerfeld said, but it comes down to how the technology is applied, and testing is critical.
To mitigate those risks, organisations from governments to private companies needed to think carefully, Dr Kummerfeld said.
"So it's not automatically a bad idea, but if it's rushed into implementation and it's treated as a way of just replacing people and saving money, that has risks," he said.
Dr Kummerfeld said it was likely that other organisations were trying to use AI in this way.
"So grappling with the challenges and the risks it presents is an important thing to do now, because there are a lot of them that will fly under the radar and we will only discover the consequences down the line," he said.
ABC/AP