Some powerful artificial general intelligence (AGI) systems may eventually have to be banned, a member of the government's AI Council says.
Marc Warner, also boss of Faculty AI, told the BBC that AGI needed strong transparency and audit requirements as well as more inbuilt safety technology.
And he said the next six months to a year would require "sensible decisions" on AGI.
His comments follow the EU and US jointly saying a voluntary code of practice for AI was needed soon.
The AI Council is an independent expert committee which provides advice to government and leaders in artificial intelligence.
Faculty AI says it is OpenAI's only technical partner helping its customers safely implement ChatGPT and its other products into their systems.
The company's tools helped forecast demand for NHS services during the pandemic – but its political connections have attracted scrutiny.
Mr Warner added his name to a Center for AI Safety warning the technology could lead to the extinction of humanity. And Faculty AI was among technology companies whose representatives discussed the risks, opportunities and rules needed to ensure safe and responsible AI with Technology Minister Chloe Smith, at Downing Street, on Thursday.
AI describes the ability of computers to perform tasks typically requiring human intelligence.
"Narrow AI" – systems used for specific tasks such as translating text or searching for cancers in medical images – could be regulated like existing technology, Mr Warner said.
But AGI systems, a fundamentally novel technology, were much more worrying and would need different rules.
"These are algorithms that are aimed at being as smart or smarter than a human across a very broad domain of tasks – essentially, every task," Mr Warner added.
Humanity owed its position of primacy on this planet to its intelligence, he said.
"If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe," Mr Warner said.
"That doesn't mean for certain that it's terrible – but it does mean that there is risk, it does mean that we should approach it with caution.
"At the very least, there needs to be sort of strong limits on the amount of compute [processing power] that can be arbitrarily thrown at these things.
"There is a strong argument that at some point, we may decide that enough is enough and we're just going to ban algorithms above a certain complexity or a certain amount of compute.
"But obviously, that is a decision that needs to be taken by governments and not by technology companies".
Some say concerns around AGI are distracting from problems with existing technologies – bias in AI recruitment or facial-recognition tools, for example.
But Mr Warner said this was like saying: "'Do you want cars or aeroplanes to be safe?' I want both."
Others say too much regulation might make the UK less attractive to investors and stifle innovation.
But Mr Warner said the UK could find a competitive advantage in encouraging safety.
"My long-term bet is that actually, to get value out of the technology, you need the safety – in the same way to get value out of the aeroplane, you need the engines to work," he said.
The UK's recent White Paper on regulating AI was criticised for failing to set up a dedicated watchdog.
But Prime Minister Rishi Sunak has outlined the need for "guardrails" and said the UK could play "a leadership role".
On Wednesday, US Secretary of State Antony Blinken and European Union Commissioner Margrethe Vestager said voluntary rules were needed quickly.
The EU Artificial Intelligence Act, which will be among the first to regulate AI, is still going through legislative processes.
And Ms Vestager said it would take two to three years for different pieces of legislation to come into effect – "and we're talking about a technological acceleration that is beyond belief".
But industry and others would be invited to contribute to a draft voluntary code of conduct within weeks.
After a meeting of the fourth US-EU Trade and Technology Council, Mr Blinken said it was important to establish voluntary codes of conduct "open to" a "wide universe of countries… all like-minded countries".
© 2023 BBC. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
Powerful artificial intelligence ban possible, government adviser warns