Biased Warnings: Examining the Risks of Unverified AI Speculation

The motivation for this blog stems from a recent article about Geoffrey Hinton, a recipient of the Nobel Prize in Physics and a renowned figure in artificial intelligence, who once again issued an alarmist warning about AI. According to reports from foreign media outlets, including the British daily The Guardian on December 27, 2024, Hinton appeared on BBC Radio, stating, "There is a possibility that humanity will go extinct within 30 years." He estimated a 10–20% chance that AI could destroy humanity within the next three decades and predicted that powerful AI, surpassing human capabilities, could emerge within 20 years and potentially gain control over humanity. A similar pattern was observed with the late Stephen Hawking, a celebrated physicist known for his work on black holes and the Big Bang theory, who also issued extreme warnings about AI without providing sufficient evidence.

While Hinton’s groundbreaking academic contributions to AI are undisputed, his consistently alarmist warnings over the past nine years have raised questions about his judgment. For instance, in 2016, Hinton declared, “We should stop training radiologists right now. It’s obvious that within five years, deep learning will outperform radiologists.” This statement failed to account for the limitations of deep learning and the nuanced complexities of interpreting medical images, which require more than computational precision. Even today, AI faces significant challenges before it can be widely trusted and adopted in clinical practice. Such oversimplifications of AI’s potential have contributed to notable failures, such as IBM Watson’s widely publicized shortcomings in healthcare.

The purpose of this blog is not to debate the validity of opinions expressed by world-class experts. Nor do I intend to address the risks of totalitarianism governed algorithmically by AI. While it is natural for prominent figures to attract media attention, problems arise when their claims are accepted as absolute truths, without proper scrutiny from scholars and influencers, and are then amplified by domestic and foreign media. This dynamic harms not only public perception but also the integrity of academic discourse.

In particular, professors who generalize based on a few isolated facts often fail to recognize the superficiality of their understanding. By presenting themselves as experts in public forums, they actively promote biased claims that mislead the public. Such behavior not only confuses the general audience but also undermines the credibility of genuine expert discourse.

Numerous examples show that even academically brilliant individuals can sometimes exhibit extreme and impractical behavior. A notable case is Hinton's protégé, Dr. Ilya Sutskever, who spearheaded the controversial 2023 board decision that ousted OpenAI CEO Sam Altman and removed President Greg Brockman from the board. This decision was widely criticized as rash and disconnected from reality, raising concerns about its potential to destabilize public trust in leadership and innovation. Sutskever appears to have overlooked the fact that excellence and distinction in academia do not necessarily translate to practical leadership in the real world. His actions reflect a tendency to behave more like an irresponsible activist than a responsible leader, a trait not uncommon among professors in academia.

In today's news (Dec 31, 2024), a top Google product leader, echoing Sutskever's views, claimed that a direct approach to achieving artificial superintelligence (ASI) is becoming increasingly feasible due to advancements in scaling test-time compute. Frankly, I don't trust this claim, and it's possible that the public is being intentionally misled. For example, in the years leading up to 2017, the commercialization of AI-powered vehicles was repeatedly announced as imminent, and in 2018 Waymo partnered with Jaguar to deploy self-driving cars by 2020. While Waymo has made significant progress, including launching a 24-hour robotaxi service in San Francisco using driverless Jaguar I-Paces, the widespread deployment of fully autonomous vehicles has faced numerous delays and challenges. Reliable autonomous driving requires addressing complex real-world scenarios, such as navigating urban environments with unpredictable pedestrian and pet movements. Experts have long acknowledged that despite advances in AI technology, fundamental limitations remain in responding to complex and sudden situations, making self-driving cars feasible only in highly controlled or limited environments. Yet media coverage over the past decade has often presented an overly optimistic narrative, fueling public misconceptions about the state of AI and its capabilities.

In data-driven deep learning, having more data generally leads to a more capable and accurate network. However, this holds true only up to a certain point. Beyond that, increasing the amount of data does not necessarily guarantee further improvements, as performance gains often follow a logarithmic pattern or plateau without significant breakthroughs. Current deep learning networks primarily excel at interpolating within the data distribution they are trained on, rather than robustly approximating a target function across diverse or unpredictable scenarios. As a result, their performance is often limited to controlled or idealized environments and tends to degrade when applied to real-world settings with dynamic and unpredictable data distributions. While AI can outperform humans in highly specific and limited tasks, such as calculations, I believe it is unlikely to surpass human capabilities across a wide range of real-world environments within our lifetimes.
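To make the interpolation point concrete, here is a minimal sketch of my own (assuming NumPy and scikit-learn are available): a small network fits sin(x) accurately inside its training interval, yet its error grows sharply just outside it.

```python
# Illustration: deep networks interpolate well within the training
# distribution but extrapolate poorly outside it. (A toy sketch, not
# a claim about any specific production system.)
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data confined to one interval: the "training distribution".
x_train = rng.uniform(-2 * np.pi, 2 * np.pi, size=(2000, 1))
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

def mse(x):
    """Mean squared error of the model against the true function sin(x)."""
    return np.mean((model.predict(x) - np.sin(x).ravel()) ** 2)

x_in = np.linspace(-2 * np.pi, 2 * np.pi, 500).reshape(-1, 1)   # inside the training range
x_out = np.linspace(2 * np.pi, 4 * np.pi, 500).reshape(-1, 1)   # outside the training range

print(f"in-distribution MSE:     {mse(x_in):.4f}")   # typically small
print(f"out-of-distribution MSE: {mse(x_out):.4f}")  # typically far larger
```

The exact numbers depend on the seed and architecture, but the qualitative gap between in-range and out-of-range error is robust, and it mirrors the degradation deployed systems show when real-world inputs drift away from the training distribution.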

Although I maintain a critical view of AI, I fully acknowledge the transformative impact and undeniable achievements of deep learning, which has surpassed traditional methods by leveraging datasets and adaptive computation to tackle challenges once considered impossible. However, my critique is directed at the excessive overestimation of AI's capabilities, which often overshadows its limitations and risks misleading both the public and academia.

As a scholar who recently retired from academia, I have witnessed countless instances where successes in limited contexts or early-stage results were exaggerated and misrepresented in the news. For example, in 1994, an early-stage result from autonomous vehicle research in the lab next to mine was overly hyped and reported as if the era of self-driving cars was imminent. While recent advancements in autonomous vehicles are remarkable, achieving fully autonomous cars capable of navigating complex environments with mixed traffic, pedestrians, pets, and other unpredictable factors remains an entirely different challenge—one that is unlikely to be fully realized within my lifetime.

Similarly, in the early 2000s, Sony’s robot, which could barely walk, was portrayed in the media as if the era of robots had arrived. More recently, the appearance of a robot capable of skillfully playing table tennis has led some to speculate that robots surpassing human abilities are on the horizon. While building robots that can dance and play table tennis is impressive, the leap to creating robots that genuinely behave like humans is a far greater challenge—one that also seems unlikely to be achieved within my lifetime. Robots may indeed outperform humans in very limited and controlled environments. However, in complex, interconnected situations that require repeated trial and error, adaptive responses, and decision-making under ambiguous causal relationships and unclear boundaries, it remains exceedingly difficult for robots to surpass human capabilities. In particular, while recent advances in electronic skin (e-skin) have enabled sensors capable of detecting pressure, temperature, and even texture, replicating the full complexity of human skin—integrating multiple functionalities—remains a monumental challenge that may take well over a century to achieve.

Despite the limited capabilities of AI in decision-making, its immense computational power allows it to optimize objectives and outperform humans in narrowly defined tasks. The real threat lies in the excessive trust humans place in its decisions. Problems arise when small variations in input data push AI systems beyond their training distribution, leading to critical errors. Furthermore, input data can often be manipulated to deliberately induce incorrect decisions. Highlighting these vulnerabilities in discussions about AI threats is not only meaningful but essential for ensuring the responsible and safe integration of AI into society.
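To make the manipulation point concrete, here is a minimal sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015), assuming PyTorch. The untrained linear model is merely a stand-in for a trained classifier; against real trained image models, perturbations far too small for a human to notice are often enough to flip the decision.

```python
# Toy FGSM: nudge every input dimension in the direction that increases
# the loss, using the gradient of the loss with respect to the input.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(100, 2)            # stand-in for a trained classifier

x = torch.randn(1, 100, requires_grad=True)
with torch.no_grad():
    label = model(x).argmax(dim=1)         # attack the model's current decision

loss = F.cross_entropy(model(x), label)
loss.backward()                            # gradient of the loss w.r.t. the input

eps = 0.2                                  # perturbation budget; usually enough to flip this toy model
x_adv = x + eps * x.grad.sign()            # one FGSM step

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

The toy model is beside the point; the mechanism is what matters. Because the gradient of the loss with respect to the input is computable, an adversary can push every input dimension in exactly the direction that degrades the decision.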

As I conclude this blog, which has garnered little attention, I find myself reflecting on how best to address these issues effectively. Why do thoughtful, "buzz-killing" analyses like this fail to capture public interest, while exaggerations and conspiracies spread effortlessly and are often accepted as truth despite their inaccuracies? How do we explain activists who once warned of the dangers of electromagnetic waves and now casually hold smartphones, which emit far stronger electromagnetic waves, against their heads? Why do people express alarm over Fukushima's contaminated water, thousands of kilometers away, claiming it endangers us (despite effects that diffusion theory and the divergence theorem show to be negligible, diminishing with the inverse square of the distance), while neglecting the pollution caused by their own waste?
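For readers who want that parenthetical spelled out, here is the flux argument in its simplest form; this is an idealized point-source sketch of my own, not a full oceanographic model:

```latex
% Idealized point source emitting a conserved quantity at rate Q.
% By the divergence theorem, the total outward flux through every
% sphere S_r of radius r equals Q, so the flux density must fall
% off as the inverse square of the distance:
\[
  \oint_{S_r} \mathbf{J} \cdot d\mathbf{A} = Q
  \quad \Longrightarrow \quad
  \lvert \mathbf{J}(r) \rvert = \frac{Q}{4 \pi r^{2}}
\]
% Over thousands of kilometers, the same release is spread across an
% area growing as r^2, so the locally received dose becomes negligible.
```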



P.S. This blog is somewhat related to my concern about an emerging trend in which people increasingly rely on party-centric, economically dependent lifestyles rather than participating in the essential work necessary for societal survival. An excessive number of university graduates, PhD holders, social activists, clergy, and influencers in a country may disrupt the balance of the overall survival system. This imbalance places an increasing strain on society, as the limited fruits of productive labor must be shared among a growing number of individuals who contribute minimally or indirectly to sustaining essential functions. Furthermore, with the proportion of retired individuals rising rapidly, these challenges are further exacerbated. This demographic shift places significant pressure on social systems, forcing future generations to shoulder an ever-increasing burden to maintain society. Without addressing these imbalances, the long-term stability and productivity of society could be at serious risk.
