Top AI assistants misrepresent news content, study finds

Leading AI assistants misrepresent news content in almost half of their responses, according to a new study published on Wednesday by the European Broadcasting Union (EBU) and the BBC.
The international study examined 3,000 answers to news-related questions from artificial intelligence assistants – software programs that use natural-language commands to complete tasks for users.
It evaluated AI assistants in 14 languages for accuracy, sourcing and the ability to distinguish opinion from fact. The assistants included OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity.
Overall, 45 percent of the AI responses studied had at least one significant issue, and 81 percent had some form of problem, the study showed.
Seven percent of all online news consumers and 15 percent of those under the age of 25 use AI assistants to get their news, according to a Reuters Institute digital news report.
Reuters has contacted the companies for comment on the findings.
Companies say they want to improve
Gemini, Google’s AI assistant, has previously said on its website that it welcomes feedback so it can continue to improve the platform and make it more useful for users.
OpenAI and Microsoft have previously said that hallucinations – when an AI model produces incorrect or misleading information, often due to factors such as insufficient data – are a problem they are working to solve.
Perplexity says on its website that one of its “deep research” modes has a 93.9 percent accuracy rate for factuality.

AI assistants make some serious mistakes
A third of AI assistants’ answers showed serious sourcing errors, such as missing, misleading or incorrect attribution, according to the study.
Seventy-two percent of responses from Gemini, Google’s AI assistant, had significant sourcing problems, compared with below 25 percent for all other assistants, it said.
Accuracy issues, including outdated information, were found in 20 percent of responses across all AI assistants studied, it said.
Examples cited include Gemini incorrectly reporting changes to a law on disposable vapes, and ChatGPT reporting Pope Francis as the current Pope months after his death.
Twenty-two media organizations from 18 countries participated in the study, including CBC and Radio-Canada, as well as others from France, Germany, Spain, Ukraine, Britain and the United States.
With AI assistants also replacing traditional search engines for news, public trust could be undermined, the EBU said.
“When people don’t know what to trust, they end up trusting nothing, and that can hinder democratic participation,” said EBU media director Jean Philip De Tender in a statement.
The EBU report called on AI companies to improve how their AI assistants respond to news-related questions and to be more accountable, citing as an example how news organizations are expected to identify, acknowledge and correct mistakes.
“It’s important to make sure that the same accountability applies to AI assistants,” he said.