Seoul (AFP)

More than a dozen of the world's leading artificial intelligence firms made fresh safety commitments at a global summit in Seoul on Tuesday, the British government said in a statement.

The agreement, signed by 16 tech firms including ChatGPT-maker OpenAI, Google DeepMind and Anthropic, builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

"These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI," UK Prime Minister Rishi Sunak said in a statement released by Britain's Department for Science, Innovation and Technology.

Under the agreement, AI firms that have not yet shared how they assess the risks of their technology will publish those frameworks, according to the statement.

These will include what risks are "deemed intolerable" and what the firms will do to ensure these thresholds are not crossed.

"In the most extreme circumstances, the companies have also committed to 'not develop or deploy a model or system at all' if mitigations cannot keep risks below the thresholds," the statement added.

These thresholds will be defined ahead of the next AI summit, which is due to be hosted by France in 2025.

The firms that have agreed to the safety rules also include US tech titans Microsoft, Amazon, IBM and Instagram parent Meta; France's Mistral AI; and Zhipu.ai from China.