
Google DeepMind CEO Demis Hassabis Affirms Current AI Lacks Consciousness, Stresses Ethical Development

TLDR: Demis Hassabis, co-founder and CEO of Google DeepMind, has stated that current artificial intelligence systems do not possess self-awareness or consciousness. While acknowledging the theoretical possibility of future AI acquiring such capabilities, Hassabis emphasized the critical need for AI to remain safe, aligned with human values, and under human control as it becomes more autonomous. He likened the process of instilling ethics in AI to raising a child, highlighting the importance of careful guidance and built-in safety measures.

In a recent interview, Demis Hassabis, the visionary co-founder and CEO of Google DeepMind, firmly asserted that contemporary artificial intelligence systems are devoid of self-awareness or consciousness. This statement comes amidst ongoing global discussions about the rapid advancements and future implications of AI technology.

Hassabis, a leading figure in the AI research community, clarified his stance during an interview with CBS’s Scott Pelley. He stated, ‘I don’t think any of today’s systems feel self-aware or conscious in any way.’ The remark draws a clear distinction between the sophisticated capabilities of current AI models and genuine subjective awareness or self-experience.

However, Hassabis did not entirely dismiss the long-term potential for AI to evolve in this direction. He acknowledged, ‘These systems might acquire some feeling of self-awareness. That is possible.’ He further elaborated that while the explicit goal of DeepMind’s work is not to create conscious AI, such a development could potentially occur implicitly as systems become more advanced. He highlighted the importance for AI systems to grasp concepts of ‘self’ and ‘other’ as a foundational step towards more complex cognitive abilities.

A central theme of Hassabis’s remarks was the paramount importance of ensuring AI systems remain safe, aligned with human values, and firmly under human control, especially as they gain greater autonomy. He drew a compelling analogy, suggesting that ‘instilling ethics in AI must be approached with the same care and intentionality as raising a child.’ This perspective underscores the need for meticulous guidance and the integration of robust safety limits into AI systems from their inception.

‘Can we make sure that they are doing what we want, that they benefit society, and that they stay on guardrails—with safety limits built into the system?’ Hassabis questioned, emphasizing the proactive measures required for responsible AI development. He explained that DeepMind’s AI models, such as Gemini, are designed with architectural goals but acquire their vast capabilities through data-driven learning, mimicking human learning processes rather than being explicitly programmed for consciousness.

Looking ahead, Hassabis has previously predicted the advent of Artificial General Intelligence (AGI) within the next five to ten years, envisioning systems by 2030 that deeply understand their environment and are seamlessly integrated into daily life. This projection further amplifies the urgency of establishing strong ethical frameworks and governance for AI.

Dev Sundaram (https://blogs.edgentiq.com)
Dev Sundaram is an investigative tech journalist with a nose for exclusives and leaks. With stints in cybersecurity and enterprise AI reporting, Dev thrives on breaking big stories, from product launches and funding rounds to regulatory shifts, and giving them context. He believes journalism should push the AI industry toward transparency and accountability, especially as Generative AI becomes mainstream. You can reach him at: [email protected]
