Google DeepMind co-founder and CEO Demis Hassabis has warned of two urgent risks posed by artificial intelligence (AI): bad actors weaponising beneficial technologies, and autonomous systems doing things their designers never intended. Hassabis also called for international cooperation to set minimum standards before existing institutions are overwhelmed. Speaking at the India AI Impact Summit, he said the growing autonomy of AI systems could increase both their usefulness and their potential risks. “As the systems become more autonomous, more independent, they’ll be more useful, more agent-like but they’ll also have more potential for risk and doing things that maybe we didn’t intend when we designed them,” he said during a Bloomberg Television interview. He added that existing global institutions may not yet be equipped to manage the pace and scale of AI development, noting the technology’s cross-border impact.
“It’s digital, so it means it’s going to affect everyone in the world, probably, and it’s going to cross borders,” Hassabis noted, stressing that forums bringing together policymakers and technologists are necessary. “There has to be some element of international cooperation, or maybe at least minimum standards around how these technologies should be deployed,” he added.
What Google AI CEO Demis Hassabis said about AGI at the India AI Impact Summit
Hassabis said that artificial general intelligence (AGI), about which OpenAI CEO Sam Altman is ‘very excited’, remains out of reach. He cited three key limitations in current AI systems and said, “I don’t think we are there yet.” His remarks stand in contrast to OpenAI’s long-stated ambition of achieving AGI. Altman has argued that “superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own and, in turn, massively increase abundance and prosperity.”

Hassabis identified the first gap as the absence of continual learning. Current models are largely fixed after training and cannot adapt in real time. “What you’d like is for those systems to continually learn online from experience, to learn from the context they’re in, maybe personalise to the situation and the tasks that you have for them,” he noted.

The second limitation, he said, is in long-term reasoning. “They can plan over the short term, but over the longer term, the way that we can plan over years, they don’t really have that capability at the moment,” Hassabis highlighted.

The third is inconsistency. Hassabis noted that current systems can excel at complex tasks while stumbling on simple ones. “Today’s systems can get gold medals in the International Mathematical Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way. A true general intelligence system shouldn’t have that kind of jaggedness,” Hassabis explained.

Despite these reservations, Hassabis said in a 2024 interview that he expects true AGI to arrive within five to ten years.





