By Mohamed Hamad Al Kuwaiti*
In recent years, the world has witnessed a remarkable increase in the capabilities of artificial intelligence and the development of models capable of simulating human behaviour with unprecedented accuracy.
While these advancements have had a positive impact on various sectors, such as healthcare, education, and industry, they have also introduced serious challenges, most notably the exploitation of such technologies to carry out sophisticated fraud schemes that are difficult to detect using traditional methods.
Today’s fraudsters no longer rely on simple deception or fake emails; instead, they employ advanced tools that anyone can access to execute professional-grade attacks that can mislead both individuals and institutions.
Among the most advanced methods to emerge recently is deepfake technology, which enables the creation of highly realistic images, videos, and audio recordings. Some fraudsters have used it to produce audio clips of senior figures within companies instructing employees to transfer funds or share sensitive information. The challenge lies in the growing difficulty of distinguishing authentic content from fabricated material, particularly as the models generating it continue to improve. This poses a significant obstacle for cybersecurity professionals and for the public, who may fall victim to convincing fabrications.
AI-powered text bots have also enabled new forms of fraud, capable of writing highly persuasive messages in Arabic or any other language. Fraudsters no longer need to be experts in social engineering or persuasion; intelligent models can generate human-like, carefully crafted messages that target victims’ vulnerabilities. Some of these messages leverage personal information extracted from social media to make the deception more believable—for example, by implying familiarity with the victim’s professional life or daily activities.
Beyond the sophistication of these techniques, the greatest challenge lies in AI’s ability to learn from attempts to detect it. Some fraudulent tools are continuously trained to avoid patterns recognised by cybersecurity systems, making cybercrime smarter and harder to anticipate.
With generative AI now widely accessible, the pool of fraudsters has expanded to include individuals with no technical background who simply exploit ready-made tools to execute advanced attacks.
To counter this escalation, companies and governments have begun investing in defensive tools powered by AI, such as deepfake detection systems and behavioural analytics engines that monitor unusual activity within digital environments.
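To make the idea of behavioural analytics concrete, the short Python sketch below flags a transaction whose amount deviates sharply from a customer's history. It is an illustrative toy only: the function name, data, and threshold are hypothetical, and production engines combine many more signals (device, location, time of day) with far more robust statistics.

```python
# A toy illustration of the anomaly detection that behavioural analytics
# engines build on. All names, data, and thresholds here are hypothetical.
from statistics import mean, stdev

def is_unusual(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag an amount whose z-score against the user's history exceeds the threshold."""
    if len(history) < 2:
        return False  # too little data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # history is constant; any change is unusual
    return abs(amount - mu) / sigma > threshold

# A sudden large transfer stands out against routine payments.
past_transfers = [120.0, 95.0, 140.0, 110.0, 130.0]
print(is_unusual(past_transfers, 9500.0))  # True: flag for human review
print(is_unusual(past_transfers, 125.0))   # False: within the normal range
```

Real defensive systems are far more sophisticated, but the principle is the same: learn what normal looks like, then flag sharp departures from it.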
Organisations are also intensifying employee training, updating cybersecurity policies, and strengthening legislation to penalise the creation or dissemination of AI-generated fraudulent content. However, these efforts remain insufficient unless accompanied by individual awareness that protects people from falling victim to such threats.
Individuals must exercise a high level of caution when receiving messages or calls requesting sensitive information or urging immediate financial action, even if the voice or text appears to come from a trusted source. Verification through a separate channel, such as calling the person or institution directly, is essential before responding. It is also advisable to avoid opening suspicious links or downloading untrusted files, and to rely on regularly updated security software. Monitoring bank accounts and email activity can also help detect unusual behaviour early.
Limiting the sharing of personal information online is equally important, as fraudsters often use such data to craft convincing messages. Enabling two-factor authentication on important accounts significantly reduces the risk of compromise.
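For readers who wonder what two-factor authentication does under the hood, the sketch below shows the time-based one-time password (TOTP) mechanism used by most authenticator apps, written with the open-source pyotp library; the secret is generated on the spot purely for illustration.

```python
# Minimal demonstration of TOTP, the mechanism behind most authenticator-app
# two-factor codes. Requires the pyotp library (pip install pyotp).
import pyotp

# In practice the secret is created once during enrolment and shared with
# the user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()             # the six-digit code the app would display
print(totp.verify(code))      # True: the code matches for the current window
print(totp.verify("000000"))  # almost certainly False: a guessed code fails
```

Because the code changes every thirty seconds and is derived from a secret the fraudster does not hold, a stolen password alone is no longer enough to take over the account.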
If an individual receives an audio or video message claiming to be from a relative, supervisor, or bank employee, they should refrain from taking any action until its authenticity is verified.
Finally, staying informed about emerging fraud techniques remains one of the most effective forms of protection: knowledge is the first line of defence in an era where artificial intelligence serves both as a tool for innovation and as a weapon in the hands of fraudsters.
*The writer is the Head of Cyber Security for the UAE Government