KHALED AL KHAWALDEH (Abu Dhabi)
Joao Alberto De Lima, an esteemed Brazilian researcher in artificial intelligence, gave a special talk at the Aletihad Forum on Monday, demonstrating how common syntax and semantic errors made by AI language models can be mitigated.
Speaking as part of a session hosted by Aletihad English Managing Editor Mohammad Ghazal and joined by several other guests, De Lima made a special plea to media professionals, who he said had a responsibility to help reduce the mistakes and hallucinations produced by AI.
AI hallucination refers to instances when a language model generates information that is false, misleading, or nonsensical. This can happen when the AI makes incorrect associations or fabricates details that aren’t based on real data or facts.
As part of his talk, De Lima demonstrated a language model grounded in 70 articles from the Aletihad English website. By progressively refining how data was retrieved, and ensuring that richer vector links connected related pieces of information, De Lima argued that the common mistakes of AI could be avoided and its answers made more consistent.
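The approach De Lima described tracks closely with retrieval-augmented generation, in which a model answers from a curated document set rather than its open-ended training data. The Python sketch below is a minimal, hypothetical illustration of that idea, not De Lima's actual system: the embed, cosine, and retrieve functions and the toy article snippets are all assumptions, with simple word counts standing in for a learned embedding model.

    # Minimal sketch of retrieval-grounded answering (hypothetical,
    # not De Lima's system). A toy bag-of-words embedding stands in
    # for a real learned embedding model.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy embedding: lowercase word counts as a sparse vector.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        # Standard cosine similarity over the sparse word-count vectors.
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, articles: list[str], k: int = 2) -> list[str]:
        # Rank the article corpus by similarity to the query and keep
        # the top k, so the model answers only from a controlled,
        # verified source of truth instead of its broad training data.
        q = embed(query)
        ranked = sorted(articles, key=lambda doc: cosine(q, embed(doc)),
                        reverse=True)
        return ranked[:k]

    # Toy stand-ins for an indexed article corpus.
    articles = [
        "Kitesurfing has grown along the UAE coast in recent years.",
        "Coastal development in Abu Dhabi includes new marinas.",
        "Football remains a popular spectator sport in the region.",
    ]
    context = retrieve("Which water sport is most popular in the UAE?",
                       articles)
    prompt = "Answer ONLY from the context below.\n\n" + "\n".join(context)
    print(prompt)  # This grounded prompt would then be passed to the LLM.

Because every answer is constrained to the retrieved passages, repeated queries draw on the same controlled sources, which is what makes the responses more consistent than those of a model left to roam its full training data.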
“I asked several different LLMs the same question about what water sport was most popular in the UAE and had impacted coastal development the most… and got three different answers,” De Lima told the audience.
De Lima argued that because AI models often retrieve from enormous, broad-based datasets, the source material can be tainted, producing skewed or incorrect answers. He said that methods he had implemented in Brazil could allow more focused, verified models to emerge.
Using his model built on the Aletihad articles, De Lima showed how answers became more accurate and consistent thanks to the more controlled source of truth.
“A proper solution to reduce AI hallucination is the modified retrieval-augmented generation method we introduced, based on our national TV experience. This is the safest use of AI,” he said.