Geoffrey Hinton, a British-Canadian computer scientist known as the “godfather” of artificial intelligence (AI), has expressed concern that the technology could lead to the extinction of humanity within the next 30 years.
Professor Hinton, who won the Nobel Prize in Physics earlier this year for his foundational work on artificial neural networks, estimates there is a “10% to 20%” chance that AI will wipe out humanity within the next 30 years, up from his previous estimate of 10%.
In an interview on BBC Radio 4’s Today programme, Hinton was asked whether his views on a potential AI apocalypse had changed. He replied, “Not at all, 10% to 20%.” Asked whether the odds had gone up, he said: “You know, we’ve never had to deal with anything smarter than ourselves.”
He went on to say: “And how many examples do you know of something more intelligent being controlled by something less intelligent? There are very few examples. There’s a mother and her baby. Evolution put a lot of effort into allowing the baby to control the mother, but that’s the only example I know of.”
Hinton, who is also a professor emeritus at the University of Toronto, said humans would be like small children compared with advanced AI systems. “I like to think of it like this: picture yourself and a 3-year-old. We’ll be the 3-year-olds,” he said.
His concerns about the technology first came to public attention in 2023, when he resigned from his position at Google so that he could speak more freely about the dangers of unregulated AI development, warning that “bad actors” could misuse AI to cause harm.
Reflecting on the rapid progress of AI development, Hinton said he had not expected the field to reach this point so soon: “I thought at some point in the future we would get here.”
He expressed concern that experts in the field now predict AI systems could become smarter than humans within the next 20 years, calling the prospect “a very frightening idea.”
Hinton noted that the pace of development has been “very fast, much faster than I expected,” and emphasized the need for government regulation, warning that relying solely on large, profit-driven companies would not guarantee the safe development of AI. “The only thing that can force these big companies to do more safety research is government regulation,” he added.