Posts

Biased Warnings: Examining the Risks of Unverified AI Speculation

The motivation for this blog stems from a recent article about Geoffrey Hinton, a recipient of the Nobel Prize in Physics and a renowned figure in artificial intelligence, who once again issued an alarmist warning about AI. According to reports from foreign media outlets, including the British daily The Guardian on December 27, 2024, Hinton appeared on BBC Radio, stating, "There is a possibility that humanity will go extinct within 30 years." He estimated a 10–20% chance that AI could destroy humanity within the next three decades and predicted that powerful AI, surpassing human capabilities, could emerge within 20 years and potentially gain control over humanity. A similar pattern was observed with the late Stephen Hawking, a celebrated physicist known for his work on black holes and the Big Bang theory, who also issued extreme warnings about AI without providing sufficient evidence. While Hinton’s groundbreaking academic contributions to AI are undisputed, his consistently...

Mathematics as the Invisible Architect: Bridging Natural Phenomena and Practical Applications

Mathematics: The Invisible Driver of Civilization

Mathematics, alongside philosophy, has systematically shaped human cognitive abilities, driving the progress of human civilization. Throughout history, it has evolved to address societal needs while simultaneously advancing as an "invisible culture" through individual and collective intellectual efforts. Fundamental concepts such as distance and space have been refined with mathematical tools, enabling simplified representations of complex phenomena and fostering systematic understanding. Mathematical tools, including Maxwell's equations for electromagnetism, the Navier-Stokes equations for fluid dynamics, elasticity equations for material properties, and the heat conduction equation, are grounded in conservation laws. They describe relationships between physical quantities over time and space and have broad applications, such as fingerprint recognition, voice analysis, data compression, medical imaging, cryptography, animation,...
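The conservation-law structure behind equations like heat conduction can be seen in a minimal sketch (my own illustrative parameters, not from the post): an explicit finite-difference scheme for the 1D heat equation $u_t = \alpha u_{xx}$ on a periodic domain conserves the total heat exactly at every step, while the profile smooths out.

```python
import numpy as np

# 1D heat conduction u_t = alpha * u_xx, explicit finite differences,
# periodic boundary conditions (illustrative parameters).
nx, steps = 100, 500
r = 0.25                               # alpha*dt/dx^2; stable for r <= 0.5

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)    # initial heat bump
total0 = u.sum()

for _ in range(steps):
    # Each step redistributes heat between neighbors, so the total
    # over the periodic domain is conserved (the conservation law).
    u = u + r * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

print(u.sum(), total0)                 # totals agree; the peak has flattened
```

With $r \le 1/2$ each update is a convex combination of neighboring values, which is also why the scheme is stable and positivity-preserving.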

Advantages and Limitations of Deep Networks as Local Interpolators, Not Global Approximators

This blog addresses a common misconception in the mathematics community: the belief that deep networks can serve as global approximators of a target function across the entire input domain. I write this post to emphasize the importance of understanding the limitations of deep networks' global approximation capabilities, rather than blindly accepting such claims, and to highlight how their strengths as local interpolators can be effectively leveraged. To clarify, deep networks are fundamentally limited in their ability to learn most globally defined mathematical transforms, such as the Fourier transform, Radon transform, and Laplace transform, particularly in high-dimensional settings. (I am aware of papers claiming that deep networks can learn the Fourier transform, but these are limited to low-dimensional cases with small pixel counts.) The misconception often stems from the influence of the Barron space framework, which provides a theoretical basis for function approximation. Wh...
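The local-interpolation point can be illustrated with a small numpy experiment (a sketch with hypothetical hyperparameters, not from the post): a one-hidden-layer tanh network fit to $\sin(x)$ on $[-\pi, \pi]$ approximates it well inside the training interval but fails outside it, where the saturated tanh units cannot track the oscillation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sin(x) sampled on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(X)

# One-hidden-layer tanh network, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 30)); b1 = np.zeros(30)
W2 = rng.normal(0.0, 0.1, (30, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    err = H @ W2 + b2 - y               # residual
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H**2)    # backprop through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Good fit inside the training interval; poor extrapolation outside it.
X_out = np.linspace(2 * np.pi, 3 * np.pi, 200)[:, None]
mse_in = float(np.mean((predict(X) - np.sin(X)) ** 2))
mse_out = float(np.mean((predict(X_out) - np.sin(X_out)) ** 2))
print(mse_in, mse_out)
```

Outside the training range the pre-activations saturate and the network output becomes nearly constant, so the error on $[2\pi, 3\pi]$ is far larger than on the training interval: interpolation succeeds, global approximation does not.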

The Impact of Data-Driven Deep Learning Methods on Solving Complex Problems Once Beyond the Reach of Traditional Approaches

This blog is intended for mathematicians with limited background in physics and computational biology. Recent advancements in data-driven deep learning have transformed mathematics by enhancing—and sometimes surpassing—traditional methods. By leveraging datasets, deep learning techniques are redefining problem-solving and providing powerful tools to tackle challenges once considered impossible. This marks a new paradigm, driven by data, advanced computation, and adaptive learning, pushing the boundaries of what can be achieved. The profound impact of data-driven deep learning was recognized by the 2024 Nobel Prizes in Physics and Chemistry. The Nobel Prize in Physics honored John Hopfield and Geoffrey Hinton for their groundbreaking contributions to neural networks. Hopfield developed an early model of associative memory in neural networks, known as the Hopfield network, which is based on the concept of energy minimization. The energy function is represented by: \[E(\mathbf{...
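The associative-memory idea can be made concrete with a minimal sketch (hypothetical pattern and sizes, not from the post): store one binary pattern with the Hebbian rule, corrupt a few bits, and let the update rule $s \leftarrow \mathrm{sign}(Ws)$, which descends the Hopfield energy, recover the stored pattern.

```python
import numpy as np

# Stored pattern: a 16-dimensional +/-1 vector (hypothetical example).
p = np.array([1, -1, 1, 1, -1, -1, 1, -1,
              1, 1, -1, 1, -1, 1, -1, -1])

# Hebbian weights for a single stored pattern; no self-connections.
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt three bits of the stored pattern.
s = p.copy()
s[[2, 7, 11]] *= -1

# One synchronous update s <- sign(W s) decreases the Hopfield energy
# E(s) = -1/2 s^T W s and restores the pattern.
s = np.sign(W @ s).astype(int)
print(np.array_equal(s, p))  # prints True
```

Here $(Ws)_i = p_i\,(p \cdot s) - s_i$ with $p \cdot s = 16 - 2\cdot 3 = 10 > 1$, so the sign of each component snaps back to $p_i$ in a single step — the corrupted state rolls downhill in energy to the stored memory.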

Exploring the Opportunities and Limitations of Generative Models in Medical Imaging

This blog explores the opportunities and limitations of generative models, including GANs and Diffusion models, in the field of medical imaging. Generative models like ChatGPT have undeniably achieved remarkable success in language modeling and the entertainment industry, where minor errors, omissions, or inaccuracies are less critical and can be easily corrected through human intervention and iterative refinement. The success of these data-driven generative models is anticipated to have a profound impact in the future, as they harness the collective wisdom of large datasets and efficiently tackle time-consuming, routine tasks. However, the requirements in the medical domain are far more stringent, with a heavy emphasis on accuracy and expert interpretation. For example, the expertise of a skilled specialist is far more valuable than the average opinion of a general practitioner, and there are countless patient-specific cases that cannot be adequately captured by collected data through...

Rethinking Innovation in Academia and R&D

Innovation and Academia: Reflections on Progress and Challenges

Over four decades in academia, I have often encountered the recurring themes of innovation, reform, and the call for pioneering research and development (R&D). The mantra of "High Risk, High Return" has emphasized long-term vision over short-term gains. Yet, this relentless focus has led to widespread fatigue, as many innovative efforts remain confined to academic circles, rarely transitioning into practical, impactful industrial applications. Academics who achieve breakthroughs often lack the resources or expertise to transform them into successful commercial products. Furthermore, industry-academia collaborations frequently fall short due to subtle yet pervasive challenges, leaving promising innovations as mere line items on résumés rather than societal advancements.

Lessons from History: The Role of Constraints in Driving Innovation

Developing effective R&D policies that encourage both technological an...

Utilizing Implicit Neural Representations for Solving Ill-posed Inverse Problems

Recently, the field of medical imaging has witnessed numerous attempts aimed at producing high-resolution images with significantly insufficient measured data. These endeavors are motivated by a variety of objectives, such as reducing data acquisition times, enhancing cost efficiency, minimizing invasiveness, and elevating patient comfort, among other factors. Nevertheless, these efforts necessitate tackling severely ill-posed inverse problems, due to the significant imbalance between the number of unknown variables (needed for desired resolution) and the number of available equations (derived from measured data). For a clearer understanding, let's examine a linear system represented by $\mathbf{A}I = \mathbf{b}_{I} + \mathbf{\epsilon}$, where $\mathbf{A}$ represents an $m \times n$ matrix with a highly underdetermined scenario ($m \ll n$). This matrix $\mathbf{A}$ serves as a linearized forward model. In this formulation, $I$ is an $n$-dimensional vector representing the imag...
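The imbalance between unknowns and equations can be demonstrated with a small numpy sketch (illustrative sizes and random data, not from the post): when $m \ll n$, the minimum-norm least-squares solution fits the measurements exactly yet differs substantially from the true image, which is why prior information (such as an implicit neural representation) must supply the missing constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                       # far fewer equations than unknowns

A = rng.normal(size=(m, n))          # linearized forward model
x_true = rng.normal(size=n)          # "true image" (hypothetical)
b = A @ x_true                       # noiseless measurements

# Minimum-norm solution: just one of infinitely many exact solutions.
x_mn = np.linalg.pinv(A) @ b

residual = np.linalg.norm(A @ x_mn - b)      # ~0: data fit perfectly
recon_err = np.linalg.norm(x_mn - x_true)    # large: wrong reconstruction
print(residual, recon_err)
```

The pseudoinverse returns the projection of the true image onto the 20-dimensional row space of $\mathbf{A}$; the remaining 80 dimensions are invisible to the measurements, so any reconstruction method must choose among the exact solutions using a prior.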