Monday 12 Jan 2026 Abu Dhabi UAE

TRENDS symposium analyses role of AI in regulatory systems

19 Oct 2025 10:35

ABU DHABI (ALETIHAD)

TRENDS Research & Advisory, through its virtual office in Germany, organised a symposium titled “The Role of AI in Security: Balancing Technological Limits and Human Responsibility”.

The event was held on the sidelines of TRENDS’ participation in the 77th edition of the Frankfurt International Book Fair 2025, bringing together an elite group of experts and specialists in the security, defence, intelligence, and counterterrorism fields.

Participants in the symposium, moderated by Hazza Saif Al-Hammadi, a researcher at TRENDS, emphasised that the world is facing a dual challenge arising from the rapid advancement of AI technologies and their integration into security infrastructures, regulatory systems, counterterrorism frameworks, intelligence analysis, and military defence. They noted that this interconnection increases the risks posed by AI to the global security landscape.

Reverse AI

The symposium began with a presentation by Dr. Mohammed Abdullah Al-Ali, CEO of TRENDS Research & Advisory, who stated that national and international security is rapidly transforming due to AI. He explained that AI is now employed in counterterrorism, cybersecurity, surveillance, crime prediction, border control, and the development of military strategies, adding that these technologies increase efficiency and enable the early detection of threats.


Dr. Al-Ali pointed out that with the accelerating spread of AI systems, a range of new risks has emerged, including “reverse AI,” in which attackers exploit or manipulate intelligent systems to deceive them. He warned that such manipulation could negatively affect emergency and security systems, endangering lives and communities, and that the technology can be used as an invisible weapon threatening transparency and justice in emergency management.

Bias and Breach Risks

Dr. Al-Ali noted that despite the power and effectiveness of AI, it poses several risks, including algorithmic bias, misuse in surveillance, violations of privacy, and overreliance on automated systems. Without proper oversight, the use of AI could lead to misclassifications and breaches of international law.

He stressed that AI’s role should be to assist in making critical security decisions, not to replace human judgment. He added that no single country can confront the challenges of AI alone, as there is an urgent need for international cooperation, policy development, and the establishment of ethical standards to govern the use of AI in security.

Yan St-Pierre, CEO of the Modern Security Consulting Group (MOSECON) and a Berlin-based counterterrorism advisor, emphasised that AI has become a central element in both combating terrorism and developing its methods. While AI provides advanced analytical and intelligence capabilities, it also carries significant risks of misuse. Its dual nature, he explained, makes it a tool that can either strengthen security or empower criminals, which requires continuous and responsible human oversight.

St-Pierre noted that extremist organisations exploit AI in three main areas: propaganda and recruitment, cyberattacks, and the planning of field operations. Such use allows these groups to spread rapidly, influence public opinion, and execute complex digital and physical attacks that often exceed the reach of traditional surveillance systems.

He explained that AI tools help terrorists carry out cyberattacks such as ransomware, phishing, and denial-of-service (DoS) attacks, and are also used to analyse infrastructures and collect data for planning field attacks. St-Pierre said that security agencies, in turn, employ AI to gather open-source intelligence, counter disinformation, analyse large datasets, track financial activities, and monitor extremist content.

The expert cautioned that despite these advancements, AI can never replace human understanding of security contexts. Overreliance on AI could widen the gap between terrorists and security agencies. Therefore, he stressed the urgent need for ethical and legal frameworks, along with transparency and accountability mechanisms, since final decisions must remain in human hands.

Strengthening Cyber Defence

Mustafa Al-Ammar, member of Germany’s Christian Democratic Union (CDU), stated that AI is a tool supporting security and the justice system, helping accelerate procedures and analyse data with accuracy and efficiency. However, he warned that AI also poses real dangers, most notably deepfakes and media disinformation, which can be used to destabilise societies and influence public opinion. He noted that technology can be a means of protection or a tool of destruction, depending on who uses it.

Al-Ammar stressed the need to build modern digital security systems that enable the fast and secure exchange of information among security entities at both the national and European levels, while reinforcing cyber defence through intelligent systems capable of predicting threats. He cautioned that data protection should not turn into protection for criminals.

He called for achieving a balance between privacy and security requirements within a transparent and accountable framework, alongside unifying European standards in cyber defence and transforming Europol into an integrated European body utilising AI to confront emerging threats.

Human Judgment

Dr. Jassim Mohamed, Head of the European Centre for Counterterrorism and Intelligence Studies (ECCI), argued that AI has become a key instrument in intelligence work, used to analyse massive volumes of data and detect threats rapidly. However, he emphasised that the final decision must always remain in human hands, as AI does not understand ethical or moral contexts, making total reliance on it a danger.

He explained that among the most significant risks of AI are biases arising from unbalanced data, excessive trust in automated results, and the potential misuse of AI for disinformation and deepfake production. Its applications, he added, also raise complex legal and ethical questions that demand strict oversight.

Dr. Mohamed underscored that the human element remains fundamental, stressing that people must stay “in the loop,” reviewing and validating decisions. He advocated continuous training in analysis, ethics, critical thinking, transparency, and accountability.

He further recommended establishing clear limits for AI use, creating regulatory bodies, controlling access to sensitive technologies, and strengthening international cooperation to develop legal frameworks that safeguard democratic values.


Source: Aletihad - Abu Dhabi
Copyrights reserved to Aletihad News Center © 2026