The Promise and Pitfalls of AI-Powered Public Services: Governments Tread Cautiously

Tuesday 09 July 2024 - 12:30

As the world marvels at the capabilities of generative AI like ChatGPT, governments around the globe are exploring the potential of these cutting-edge technologies to revolutionize public services. However, amidst the excitement, a cautious approach prevails as officials grapple with the inherent risks and limitations of relying too heavily on AI-powered systems for crucial tasks.

In the wake of ChatGPT's meteoric rise, visions of more efficient and accessible public services have captured the imagination of policymakers. Colin van Noordt, a researcher on the use of AI in government based in the Netherlands, notes that while early chatbots "tended to be simpler, with limited conversational abilities," the emergence of generative AI has revived dreams of human-like advisors capable of working around the clock, addressing inquiries on everything from benefits to taxes.

The allure of generative AI lies in its ability to provide human-like responses, and if trained on sufficient high-quality data, it could theoretically handle a wide range of questions about government services. However, the technology's propensity for making mistakes or generating nonsensical answers—a phenomenon known as "hallucinations"—has raised significant concerns.

In the United Kingdom, the Government Digital Service (GDS) has trialled GOV.UK Chat, a ChatGPT-based chatbot designed to answer citizens' questions about government services. While nearly 70% of trial participants found its responses useful, the agency noted cases in which the system generated incorrect information and presented it as fact, raising concerns that users could place undue confidence in a tool that is sometimes wrong.

Other countries, such as Portugal, have also ventured into generative AI for public services. The Justice Practical Guide, a chatbot developed with European Union funding, aims to answer basic questions on subjects such as marriage, divorce, and business registration. While the chatbot, built on OpenAI's GPT-4 language model, handles straightforward queries well, it has struggled with more complex scenarios, prompting officials to acknowledge that its trustworthiness and confidence levels still need to improve.

The flaws and limitations of generative AI have led many experts to advise caution in its deployment for public services. Colin van Noordt suggests that chatbots should be seen as "an additional service, a quick way to find information," rather than a means to replace human personnel and reduce costs.

Sven Nyholm, a professor of AI ethics at Munich's Ludwig Maximilians University, highlights the issue of accountability, arguing that "a chatbot is not interchangeable with a civil servant." He emphasizes that public administration requires accountability and moral responsibility, which AI systems cannot provide.

Beyond accountability, Nyholm also raises concerns about reliability, noting that while newer chatbots create an illusion of intelligence and creativity, they are prone to making "silly and stupid mistakes" that can be humorous in some contexts but potentially dangerous if relied upon for critical decision-making.

As governments grapple with the complexities of integrating generative AI into public services, some are exploring alternative approaches. Estonia, a trailblazer in digital governance, is developing a suite of chatbots for state services under the name Bürokratt. However, rather than relying on Large Language Models (LLMs) like ChatGPT, Estonia's chatbots utilize Natural Language Processing (NLP) algorithms.

NLP algorithms break down requests into smaller segments, identify keywords, and infer the user's intent. This approach, while more limited in its ability to imitate human speech and detect nuanced language, offers greater control and transparency, reducing the likelihood of providing incorrect or misleading answers.

"If Bürokratt does not know the answer, the chat will be handed over to a customer support agent, who will take over the chat and will answer manually," explains Kai Kallas, head of the Personal Services Department at Estonia's Information System Authority.

While LLM-based chatbots often give more conversational and nuanced answers, Colin van Noordt points out that this comes at the cost of less control over the system and the risk of it giving different answers to the same question.

As governments navigate the uncharted waters of AI-powered public services, a delicate balance must be struck between embracing innovation and mitigating risks. The path forward may lie in a judicious combination of human expertise and technological assistance, where AI serves as a supplementary tool rather than a complete replacement for human judgment and accountability.

The journey towards AI-enhanced public services will undoubtedly be fraught with challenges, but by proceeding with caution and a commitment to ethical principles, governments can harness the transformative potential of these technologies while safeguarding the trust and well-being of their citizens.

