# US and UK will work together to test AI models for safety threats

Photo illustration of a brain on a circuit board. Cath Virginia / The Verge | Photos from Getty Images

The United States and United Kingdom have agreed to work together to monitor advanced AI models for safety risks. The two countries will collaborate on research and conduct at least one joint safety test.

Both countries have named safety a top concern in the development and use of AI models. US President Joe Biden’s executive order on AI required companies developing AI systems to report safety test results. Meanwhile, UK Prime Minister Rishi Sunak announced the creation of the UK AI Safety Institute, saying that companies like Google, Meta, and OpenAI must allow the vetting of their tools.

US Commerce Secretary Gina Raimondo said the government is “committed to developing…
