MAYS IBRAHIM (ABU DHABI)

The Summit 333 for AI Safety commenced on Sunday at New York University Abu Dhabi (NYUAD), gathering leading global experts to advance dialogue around AI safety, ethics and governance. The three-day event runs until November 11, with the final day held across both NYUAD and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI).

In a keynote address, Dr Saeed Aldhaheri, Director of the Centre for Future Studies at the University of Dubai and adviser to the World Economic Forum, called for a human-centred approach to AI built on ethics, transparency and trust.

Aldhaheri warned that as nations rush to integrate AI into education, healthcare and government services, they risk reproducing societal biases, eroding accountability and amplifying misinformation through deepfakes and algorithmic errors.

“AI will not shape society, we will shape society,” he said, urging governments to embed ethics “by design” before the technology reaches a new tipping point.

The UAE, he noted, has already made “responsible AI” a national priority through its AI Strategy 2031, which includes ethical guidelines and an AI Seal certifying compliance for models developed or deployed locally.

‘Slow Down’ Before It’s Too Late

Echoing Aldhaheri’s call for caution, Lia Lungu, Global Governance and Strategy Expert at Family Office & Capital Resilience, urged world leaders to “slow down” – not to halt innovation, but to allow space for ethical reflection and architectural redesign.

“We have to rethink what governance means,” she said, arguing that a universal, rules-based AI safety framework may be impossible. Instead, Lungu suggested that governance should be embedded directly into AI system architectures.

Lungu believes that complete alignment between AI and human values is a mathematical impossibility. She warned that societies risk deploying systems capable of self-improvement before understanding their trajectories.

“Most of AGI is already invented,” she said. “The question now is whether deployment can be slowed until we know how to use it safely.”

Shadab Hussain, Lead Engineer at MathCo, said that although “the race will not slow down”, countries can still build safeguards at national and regional levels.

Naiyarah Hussain, West & Central Asia Lead at AI Safety Asia, outlined a practical roadmap for governments to build AI readiness through phased adoption, from literacy and certification to risk management and security.

Her framework includes integrating AI education into national curricula, conducting “red-team” testing for vulnerabilities, and developing policies to address deepfakes, data leaks and multi-agent risks.

Hussain maintained that leaders must continuously update and adapt AI safety frameworks. Aside from the technology, governance must address mental health, security, diversity, and access to resources such as water, energy, and computing infrastructure, she said.

The Job Equation

Beyond governance, speakers also addressed the profound impact of AI on employment.

Dr Aldhaheri underscored the need to “skill and reskill” societies to ensure that AI augments, rather than replaces, human productivity.

Shadab Hussain echoed that view, noting that automation is already disrupting operational sectors.

Governments need to start with AI literacy and implement upskilling policies that define what each profession must learn to remain employable in the AI era, he said.

Lungu, meanwhile, offered a broader economic view, describing AI as a catalyst for “a redesign of capitalism”.

While dystopian scenarios involve mass unemployment and social instability, she believes the most likely outcome is a temporary shock followed by a rebalanced economy.

She also suggested that AI-driven abundance could eventually push governments to explore universal benefit models, moving beyond a purely wage-based economic system.