AI Experts Weigh In: Timelines and Risks of Advanced Machine Intelligence

TLDR: A 2012/2013 survey of AI experts, including top-cited researchers and conference participants, puts the median estimate for the arrival of high-level machine intelligence (HLMI) at around 2040-2050 (50% probability), with the probability rising to 90% by 2075. Experts assign a 75% chance that superintelligence follows within 30 years of HLMI, and a combined 31% chance that the development turns out to be ‘bad’ or ‘extremely bad’ for humanity, underscoring significant concern about the future impact of advanced AI.

The future of artificial intelligence (AI) has long been a subject of both fascination and concern. While some envision a future where advanced AI brings significant risks, others dismiss these notions as mere science fiction. To bridge this gap and understand the prevailing expert opinions, a comprehensive survey was conducted among AI professionals in 2012/2013.

The research, titled “Future Progress in Artificial Intelligence: A Survey of Expert Opinion” by Vincent C. Müller and Nick Bostrom, aimed to clarify the distribution of opinions regarding the timeline for high-level machine intelligence (HLMI), the risks associated with its development, and the speed at which these advancements might occur. For the purpose of the survey, HLMI was defined as a machine intelligence capable of performing most human professions at least as well as a typical human.

The questionnaire was distributed to four distinct groups of experts: participants from the “Philosophy and Theory of AI” conference (PT-AI), attendees of the “Artificial General Intelligence” conferences (AGI), members of the Greek Association for Artificial Intelligence (EETN), and the top 100 most cited authors in artificial intelligence (TOP100) according to Microsoft Academic Search. This diverse selection aimed to capture a broad spectrum of views within the AI community.

When Can We Expect High-Level Machine Intelligence?

One of the central questions posed to experts concerned the timeline for HLMI. The median estimates indicated a one-in-two (50%) chance that high-level machine intelligence will be developed around 2040-2050, with the probability rising to nine in ten (90%) by 2075. These figures suggest that experts, in aggregate, expect HLMI to emerge within the coming decades, assuming scientific activity continues without major disruption.
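To see how such a cumulative estimate behaves between the reported dates, here is a minimal sketch that linearly interpolates a probability curve from the two figures above. The 2045 midpoint (standing in for "around 2040-2050") and the linear interpolation are illustrative assumptions, not part of the survey.

```python
# Minimal sketch: interpolate the median HLMI timeline from the two
# anchor points reported in the survey. The linear interpolation and
# the 2045 midpoint are illustrative assumptions, not survey data.
anchors = [(2045, 0.50), (2075, 0.90)]  # (year, cumulative probability)

def hlmi_probability(year):
    """Roughly estimate P(HLMI by `year`) between the two anchors."""
    (y0, p0), (y1, p1) = anchors
    if year <= y0:
        return p0  # this sketch carries no information before the first anchor
    if year >= y1:
        return p1
    # Linear interpolation between the two anchor points.
    return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

for y in (2045, 2060, 2075):
    print(f"P(HLMI by {y}) = {hlmi_probability(y):.0%}")
```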

The Path to Superintelligence

Beyond HLMI, the survey explored the transition to superintelligence, defined as an intellect that vastly surpasses human cognitive performance across virtually all domains. Experts were asked how likely superintelligence is to emerge once HLMI exists. The median estimate put only a 10% probability on a rapid takeoff, with superintelligence arriving within two years of HLMI, but a 75% probability on superintelligence arriving within 30 years of HLMI. This indicates an expectation of a relatively swift, though not immediate, progression from human-level to superhuman AI.
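Reading the two sets of medians together invites a back-of-the-envelope combination. The sketch below multiplies the HLMI timeline estimates by the transition estimates, treating the two as independent; that independence, and the resulting figures, are a simplification for illustration, not a result reported in the survey.

```python
# Naive combination of the survey's median estimates, assuming
# independence between HLMI arrival and the HLMI-to-superintelligence
# transition. An illustrative back-of-the-envelope, not a survey result.
p_hlmi_by_2050 = 0.50    # median: 50% chance of HLMI around 2040-2050
p_si_within_30y = 0.75   # median: 75% chance of superintelligence within 30 years of HLMI
p_si_within_2y = 0.10    # median: 10% chance of a fast takeoff within 2 years of HLMI

p_si_by_2080 = p_hlmi_by_2050 * p_si_within_30y
p_fast_takeoff_by_2052 = p_hlmi_by_2050 * p_si_within_2y

print(f"P(superintelligence by ~2080) = {p_si_by_2080:.0%}")          # 38%
print(f"P(fast takeoff by ~2052)      = {p_fast_takeoff_by_2052:.0%}")  # 5%
```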

Potential Impact on Humanity

Perhaps the most critical aspect of the survey was gauging the experts’ views on the long-term impact of superintelligence on humanity. Respondents were asked to assign probabilities to five potential outcomes: ‘Extremely good’, ‘On balance good’, ‘More or less neutral’, ‘On balance bad’, and ‘Extremely bad (existential catastrophe)’. On average, experts assigned a combined 31% probability to the development turning out ‘bad’ or ‘extremely bad’ for humanity, a figure that highlights serious concern among the surveyed professionals about the potential negative consequences of advanced AI.
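For readers who want to work with the outcome estimates directly, the sketch below encodes the five categories and sums the negative tail. Only the combined 31% figure appears in the article above; the per-category split shown here follows the paper’s approximate reported means and should be treated as an approximation.

```python
# Approximate mean probabilities experts assigned to each long-term
# outcome of superintelligence. Only the combined 31% "bad" figure is
# quoted above; the per-category split is the paper's approximate
# reported means and is an approximation, not an exact quotation.
outcomes = {
    "Extremely good": 0.24,
    "On balance good": 0.28,
    "More or less neutral": 0.17,
    "On balance bad": 0.13,
    "Extremely bad (existential catastrophe)": 0.18,
}

# Sanity check: the five categories should form a full distribution.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9

bad_tail = outcomes["On balance bad"] + outcomes["Extremely bad (existential catastrophe)"]
print(f"Combined 'bad' or 'extremely bad': {bad_tail:.0%}")  # 31%
```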

Contributing Research Approaches

The survey also asked which research approaches are most likely to contribute to HLMI. Cognitive science, integrated cognitive architectures, and algorithms revealed by computational neuroscience were identified as the three most promising areas. Interestingly, views were largely consistent across the expert groups, with one striking exception: ‘whole brain emulation’ was selected as promising by 46% of the AGI group but by no one (0%) in the TOP100 group.

In conclusion, the survey provides valuable insight into expert perceptions of AI’s future. It suggests that high-level machine intelligence is likely to emerge within a few decades, with superintelligence following within roughly three decades thereafter. Crucially, experts collectively assign a non-negligible probability to negative or even catastrophic outcomes for humanity. This underscores the importance of proactive research into the societal impacts and safety measures for advanced AI systems. For more details, refer to the full research paper, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion” by Vincent C. Müller and Nick Bostrom.

Rhea Bhattacharya
https://blogs.edgentiq.com
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday life, governance, and global equity. Her coverage is analytical, human-centric, and always ahead of the curve. You can reach out to her at: [email protected]
