
Showing posts from December, 2024

Biased Warnings: Examining the Risks of Unverified AI Speculation

The motivation for this post stems from a recent article about Geoffrey Hinton, a recipient of the Nobel Prize in Physics and a renowned figure in artificial intelligence, who has once again issued an alarmist warning about AI. According to reports in foreign media, including the British daily The Guardian on December 27, 2024, Hinton appeared on BBC Radio and stated, "There is a possibility that humanity will go extinct within 30 years." He estimated a 10–20% chance that AI could destroy humanity within the next three decades, and predicted that powerful AI surpassing human capabilities could emerge within 20 years and potentially gain control over humanity. A similar pattern was observed with the late Stephen Hawking, a celebrated physicist known for his work on black holes and the Big Bang theory, who also issued extreme warnings about AI without providing sufficient evidence. While Hinton’s groundbreaking academic contributions to AI are undisputed, his consistently...

Mathematics as the Invisible Architect: Bridging Natural Phenomena and Practical Applications

Mathematics: The Invisible Driver of Civilization

Mathematics, alongside philosophy, has systematically shaped human cognitive abilities, driving the progress of human civilization. Throughout history, it has evolved to address societal needs while simultaneously advancing as an "invisible culture" through individual and collective intellectual efforts. Fundamental concepts such as distance and space have been refined with mathematical tools, enabling simplified representations of complex phenomena and fostering systematic understanding. Mathematical tools, including Maxwell's equations for electromagnetism, the Navier-Stokes equations for fluid dynamics, the elasticity equations for material properties, and the heat conduction equation, are grounded in conservation laws. They describe relationships between physical quantities over time and space and have broad applications, such as fingerprint recognition, voice analysis, data compression, medical imaging, cryptography, animation,...
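To make the conservation-law remark concrete, the standard textbook derivation of the heat conduction equation can be sketched as follows (a generic illustration, not taken from the post itself):

```latex
% Conservation of thermal energy in an arbitrary control volume V:
% the rate of change of stored heat equals the inward flux through
% the boundary, with the flux q given by Fourier's law.
\frac{d}{dt}\int_V \rho c\, u \, dV
  = -\oint_{\partial V} \mathbf{q}\cdot\mathbf{n}\, dS,
\qquad \mathbf{q} = -k\,\nabla u .

% Applying the divergence theorem and using the arbitrariness of V
% yields the pointwise PDE:
\rho c\,\frac{\partial u}{\partial t} = k\,\Delta u
\quad\Longleftrightarrow\quad
u_t = \alpha\,\Delta u, \qquad \alpha = \frac{k}{\rho c}.
```

The same pattern, a conserved quantity balanced against a flux, underlies each of the equations named above.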

Advantages and Limitations of Deep Networks as Local Interpolators, Not Global Approximators

This blog addresses a common misconception in the mathematics community: the belief that deep networks can serve as global approximators of a target function across the entire input domain. I write this post to emphasize the importance of understanding the limitations of deep networks' global approximation capabilities, rather than blindly accepting such claims, and to highlight how their strengths as local interpolators can be effectively leveraged. To clarify, deep networks are fundamentally limited in their ability to learn most globally defined mathematical transforms, such as the Fourier transform, Radon transform, and Laplace transform, particularly in high-dimensional settings. (I am aware of papers claiming that deep networks can learn the Fourier transform, but these are limited to low-dimensional cases with small pixel counts.) The misconception often stems from the influence of the Barron space framework, which provides a theoretical basis for function approximation. Wh...
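The local-interpolation point can be seen even in a toy experiment. The sketch below (my own illustration, not from the post; the target function, network size, and training setup are all arbitrary choices) trains a small tanh network on samples of sin(x) drawn only from [-π, π], then evaluates it both inside and outside that interval. The fit is good where training data exists and degrades where it does not, which is the sense in which the network interpolates locally rather than approximating globally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the target is sampled only on the local interval [-pi, pi].
x_train = rng.uniform(-np.pi, np.pi, size=(256, 1))
y_train = np.sin(x_train)

# A small one-hidden-layer tanh network trained by full-batch gradient descent.
d_hidden = 64
W1 = rng.normal(0.0, 1.0, (1, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(0.0, 0.1, (d_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(8000):
    h = np.tanh(x_train @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                        # network output
    err = pred - y_train                      # gradient of 0.5 * squared error
    # Backpropagation for the two layers.
    gW2 = h.T @ err / len(x_train)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x_train.T @ dh / len(x_train)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def net(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Inside the training interval the network interpolates well...
x_in = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
err_in = np.abs(net(x_in) - np.sin(x_in)).max()

# ...but outside it, the network does not recover the global function.
x_out = np.linspace(2 * np.pi, 3 * np.pi, 200).reshape(-1, 1)
err_out = np.abs(net(x_out) - np.sin(x_out)).max()

print(f"max error inside  [-pi, pi]:   {err_in:.3f}")
print(f"max error outside [2pi, 3pi]: {err_out:.3f}")
```

A learned Fourier or Radon transform faces the same obstacle at much higher dimension: the training set can only ever cover a vanishing fraction of the input domain.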