Friday, July 19, 2024

"Godfather of AI" just issued an ominous warning for the future of humanity

Artificial intelligence (AI) systems could someday surpass humans to become the most intelligent species on Earth, according to Geoffrey Hinton, the computer scientist known as the “Godfather of AI.”

Hinton worked at Google for several years but announced this spring that he was leaving so he could speak openly about the potential benefits and risks of AI. He discussed both during a recent interview with Scott Pelley of CBS News’ 60 Minutes.

Hinton told Pelley AI could offer "huge benefits" when it comes to health care and drug development. But several AI-related risks also concern him. He first pointed to the jobs in those industries that could disappear if AI systems prove able to take on such complex tasks.

Hinton also warned of the potential for “fake news” to spread through AI, and for AI to create new biases in law enforcement procedures and employment processes. There’s also a “serious worry” that AI systems could create and implement their own computer codes, Hinton said, which in theory means they could update themselves.

Much is still unknown about AI's potential. But people around the world are already testing out popular systems like OpenAI's ChatGPT, which recently unveiled new features that enable it to respond to visual and audio data users upload directly. Users have deployed it to solve equations, decipher traffic signs and identify films based on single screenshots, among other things.

The manner and speed at which these tools respond to data suggest they may learn more efficiently or thoroughly than humans do. The most advanced chatbots currently operating have about one connection for every 100 in a human's brain, yet the chatbots appear to know "far more than you do," Hinton said in the interview.

Hinton contrasted AI development with other advancements in technology, which he said benefited from the ability to fail early on without serious repercussions. But with AI, “we can’t afford to get it wrong with these things,” he said. When Pelley asked for clarification, Hinton said this was because the systems “might take over,” later adding this is not a guarantee but “a possibility” that could be avoided if humans find a way to prevent AI systems from wanting to do so.

It’s unclear how much time it’ll take for these larger AI questions to be answered, but Hinton estimated that ChatGPT in particular “may well be able to reason better than us” before the end of this decade. Military use of AI also has a more specific timeline, with retired U.S. General Mark Milley recently telling 60 Minutes that 20 percent or more of “sophisticated” militaries could become robotic in “maybe 15 years or so.” The U.S. Department of Defense currently requires all military decisions to involve a human, he added.

The use of AI systems by armed forces is a concern for many. Earlier this year, the International Committee of the Red Cross (ICRC) issued a plea for world leaders to create a new set of international rules for automated weapons systems. These systems pose risks both to civilians on the ground and to the troops that deploy them, the committee said.

Newsweek reached out to the International Committee of the Red Cross for comment on Monday through the committee’s online submission form.

Hinton isn’t alone in raising concerns about AI development. Earlier this year, several tech leaders signed an open letter calling for a temporary halt to some advanced AI development efforts, which the letter said could “pose profound risks to society and humanity.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

A couple of months after the letter was published, OpenAI CEO Sam Altman advocated for AI regulation while testifying before the U.S. Congress. Though Altman said AI is “improving people’s lives,” his prepared remarks before testifying said the company “can’t anticipate every beneficial use, potential abuse, or failure of the technology.”

“OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits,” Altman said.
