One of Google’s top AI researchers has left the tech giant to focus on raising awareness about the risks of the rapidly advancing technology, issuing a stark warning about the potential “dangers” of artificial intelligence as he announced his departure.
Geoffrey Hinton’s leading work on neural networks has played a vital role in the development of AI systems that power many of today’s products.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in an interview with the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Hinton’s decision to speak out comes amid growing concern among lawmakers, advocacy groups, and tech insiders about the potential for AI-powered chatbots to spread misinformation and displace jobs. Adoption of AI technology has accelerated sharply in recent months.
In February, Microsoft revealed plans to let large corporations create and tailor their own chatbots using ChatGPT technology, and to integrate AI-powered chat functions into its Bing search engine and Edge browser.
In contrast, Google has yet to specify a timeline for adding AI chatbot capabilities to its search engine.
However, experts say the world should be more concerned than fascinated by the technology.
Recent interest in AI technology, following the success of ChatGPT, has led to a major race among tech companies, including OpenAI, Microsoft, Google, IBM, Amazon, Baidu, and Tencent, to develop and deploy similar AI tools in their products.
In March, a group of prominent individuals in the tech industry signed a letter urging a temporary halt to the training of the most powerful AI systems for six months due to the perceived “profound risks” to humanity and society.
After working part-time on Google’s AI development team for a decade, Hinton has become wary of the technology and of his role in advancing it. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times.
Hinton says he left Google so that he could speak freely about the potential risks of AI, not to criticise Google specifically.
In a statement provided to CNN, Jeff Dean, Google’s chief scientist, credited Hinton with making “foundational breakthroughs” in AI.
However, Dean said Google remains committed to adopting a responsible approach to AI, constantly learning to understand potential risks while pushing ahead with innovative ideas.
Hinton also noted that while AI could improve healthcare, it could equally pave the way for lethal autonomous weapons. He finds that prospect more pressing and frightening than the idea of robots taking over, which he believes is still far from reality.
There have been other instances at Google where concerns were raised about the potential dangers of AI.
In July, Google fired an engineer who claimed that an unreleased AI system had become sentient, saying he had violated employment and data security policies.
However, this assertion was met with skepticism and pushback from many within the AI community.