Should governments really use AI to reinvent the state?

The Trump administration wants to use AI to streamline the US government and increase its efficiency

Greggory Disalvo/Getty Images

What is artificial intelligence? It’s a question that scientists have been wrestling with since the dawn of computing in the 1950s, when Alan Turing asked: “Can machines think?” Now that large language models (LLMs) like ChatGPT have been unleashed on the world, finding an answer has never been more pressing.

While their use has already become widespread, the social norms around these new AI tools are still rapidly evolving. Should students use them to write essays? Will they replace your therapist? And can they turbocharge government?

That last question is being asked in both the US and the UK. Under the new Trump administration, Elon Musk’s Department of Government Efficiency is eliminating federal workers and rolling out a chatbot, GSAi, to those who remain. Meanwhile, British prime minister Keir Starmer has called AI a “golden opportunity” that could help reshape the state.

There is certainly government work that could benefit from automation, but are LLMs the right tool for the job? Part of the problem is that we still can’t agree on what they actually are. This was aptly demonstrated this week, when New Scientist used freedom of information (FOI) laws to obtain the ChatGPT interactions of Peter Kyle, the UK’s secretary of state for science, innovation and technology. Politicians, data protection experts and journalists – not least us – were stunned that this request was granted; a similar request for a minister’s Google search history, for example, would generally be rejected.

The fact that the records were released suggests the UK government sees using ChatGPT as more akin to a ministerial conversation with officials via email or WhatsApp, both of which are subject to FOI legislation. Kyle’s interactions with ChatGPT don’t indicate any troubling dependence on AI to form serious policy. Nevertheless, the fact that the FOI request was granted suggests that some in government believe that talking to an AI can be treated like talking to a human.

As New Scientist has extensively reported, current LLMs are not intelligent in any meaningful sense, and are just as liable to produce convincing-sounding inaccuracies as they are to offer useful advice. What’s more, their answers will also reflect the inherent biases of the material they have ingested.

In fact, many AI researchers are increasingly of the opinion that LLMs are not a route to the lofty goal of artificial general intelligence (AGI), capable of matching or exceeding anything a human can do – a machine that can truly think, as Turing might have put it. For example, around 76 per cent of respondents in a recent survey of AI researchers said it was “unlikely” or “very unlikely” that current approaches will succeed in achieving AGI.

Instead, we may have to think about these AIs in a new way. Writing in the journal Science this week, a team of AI researchers says they should “not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated”. The researchers compare LLMs with “such past technologies as writing, print, markets, bureaucracies, and representative democracies” that have transformed the way we access and process information.

Framed in this way, the answers to many of those questions become clearer. Can governments use LLMs to boost efficiency? Almost certainly, but only when they are wielded by people who understand their strengths and limitations. Should interactions with chatbots be subject to freedom of information legislation? Possibly, but existing carve-outs designed to give ministers a “safe space” for internal deliberation should apply. And can machines, as Turing asked, think? No. Not yet.
