
AI 'godfather' Yoshua Bengio feels 'lost' over life's work

By Zoe Kleinman

Watch: 'AI godfather' likens his emotions to atom bomb inventors
One of the so-called "godfathers" of Artificial Intelligence (AI) has said he would have prioritised safety over usefulness had he realised the pace at which it would evolve.
Prof Yoshua Bengio told the BBC he felt "lost" over his life's work.
The computer scientist's comments come after experts in AI said it could lead to the extinction of humanity.
Prof Bengio, who has joined calls for AI regulation, said he did not think militaries should be granted AI powers.
He is the second of the three so-called "godfathers" of AI, known for their pioneering work in the field, to voice concerns about the direction and the speed at which it is developing.
AI describes the ability of computers to perform tasks so complex that they previously required human intelligence to complete.
A recent example has been the development of AI-powered chatbots, like ChatGPT, which appear to give human-like responses to questions.
This has led to planned European Union legislation on AI. And on Wednesday, the bloc's technology chief, Margrethe Vestager, said a voluntary code of conduct for AI could be created "within the next weeks".
Some fear that advanced computational ability could be used for harmful purposes, such as the development of deadly new chemical weapons.
Prof Bengio told the BBC he was concerned about "bad actors" getting hold of AI, especially as it became more sophisticated and powerful.
"It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous.
"If they're smarter than us, then it's hard for us to stop these systems or to prevent damage," he added.
Prof Bengio admitted those concerns were taking a personal toll on him, as his life's work, which had given him direction and a sense of identity, was no longer clear to him.
"It is challenging, emotionally speaking, for people who are inside [the AI sector]," he said.
"You could say I feel lost. But you have to keep going and you have to engage, discuss, encourage others to think with you."
The Canadian has signed two recent statements urging caution about the future risks of AI. Some academics and industry experts have warned that the pace of development could result in malicious AI being deployed by "bad actors" to actively cause harm – or choosing to inflict harm by itself.
Fellow "godfather" Dr Geoffrey Hinton has also signed the same warnings as Prof Bengio, and retired from Google recently saying he regretted his work.
The third "godfather", Prof Yann LeCun, who along with Prof Bengio and Dr Hinton won a prestigious Turing Award for their pioneering work, has said apocalyptic warnings are overblown.
Twitter and Tesla owner Elon Musk has also voiced his concerns.
"I don't think AI will try to destroy humanity, but it might put us under strict controls," he said recently at an event hosted by the Wall Street Journal.
"There's a small likelihood of it annihilating humanity. Close to zero but not impossible."
Prof Bengio told the BBC all companies building powerful AI products needed to be registered.
"Governments need to track what they're doing, they need to be able to audit them, and that's just the minimum thing we do for any other sector like building aeroplanes or cars or pharmaceuticals," he said.
"We also need the people who are close to these systems to have a kind of certification… we need ethical training here. Computer scientists don't usually get that, by the way."
But not everybody in the field believes AI will be the downfall of humans – others argue that there are more imminent problems which need addressing.
Dr Sasha Luccioni, a research scientist at the AI firm Huggingface, said society should focus on issues like AI bias, predictive policing, and the spread of misinformation by chatbots, which she said were "very concrete harms".
"We should focus on that rather than the hypothetical risk that AI will destroy humanity," she added.
There are already many examples of AI bringing benefits to society. Last week an AI tool discovered a new antibiotic, and a paralysed man was able to walk again just by thinking about it, thanks to a microchip developed using AI.
Watch: AI 'godfather' Geoffrey Hinton tells the BBC of AI dangers as he quits Google
But this is juxtaposed with fears about the far-reaching impact of AI on countries' economies. Firms are already replacing human staff with AI tools, and it is a factor in the current strike under way by scriptwriters in Hollywood.
"It's never too late to improve," says Prof Bengio of AI's current state. "It's exactly like climate change.
"We've put a lot of carbon in the atmosphere. And it would be better if we hadn't, but let's see what we can do now."
Follow Zoe Kleinman on Twitter @zsk