51% of AI-generated news answers have problems; Gemini is the most error-prone chatbot

The BBC has conducted an investigation into how AI assistants represent its news and found that 51% of the responses it examined had significant problems.
In total, 91% of answers had at least some problems: 19% of AI-generated answers that cited BBC content introduced factual errors, including incorrect numbers and dates, and 13% of the quoted material was altered or did not appear in the original BBC sources.
“These errors can mislead people and distort their understanding of complex issues,” the study reads.
Of the chatbots studied, Google’s Gemini had the highest error rate.
Microsoft’s AI assistant, Copilot, cited content incorrectly in 27% of cases, Perplexity in 17%, and ChatGPT in 15%.
In many cases, the AI assistants drew on BBC articles but presented them incompletely or in ways that distorted the original, and “these errors have real-world implications, because people may make decisions based on incorrect information”.
Against this backdrop, the BBC proposes three basic steps for the future, among them that AI companies cooperate with editors to ensure news content is represented accurately.
Regulatory oversight may also be required to maintain the integrity of the media.
The study is based on an analysis of 100 news-related questions put to the AI assistants during the investigation period; the answers were assessed for accuracy, attribution of sources, editorialization, context, and the representation of BBC content.