Posts

Synthetic Paired Data Generation for Medical Imaging: Bridging the Gap Toward Faithfully Reproducing Patient-Dependent Conditional Structure

The performance of supervised learning in digital medical imaging modalities such as ultrasound and low-dose CBCT depends critically on the availability of paired datasets. These datasets must capture variability across patients, anatomical structures, and disease presentations, while providing accurate and consistent labels aligned with the measured images. Diagnostic tasks—including segmentation and detection—are particularly dependent on such paired data, requiring reliable annotations such as lesion localization, bounding regions, and clinically meaningful diagnostic labels. Consequently, robust model training requires large-scale datasets with high-quality annotations spanning diverse patient populations. However, in real clinical settings, such high-quality paired datasets are often unavailable due to the limited representation of abnormal cases, the absence of ground truth, inter-observer variability in annotations, patient-specific image heterogeneity, and the inherent variabil...

The Misallocation of Mathematical Talent: A Structural Perspective

This blog examines a recurring pattern that has persisted in mathematics since the mid-20th century. A substantial fraction of highly capable researchers devote their efforts to extending or resolving longstanding theoretical problems inherited from earlier generations. This, in itself, is not surprising—mathematics is inherently cumulative, and deep problems often require decades of sustained attention. What is striking, however, is the scale of this concentration. Today, the global population of mathematicians exceeds, by a wide margin, the total number that existed prior to the mid-20th century. At the same time, the set of mathematically grounded problems emerging from modern society—ranging from medical imaging and data-driven modeling to complex systems and engineering constraints—has expanded dramatically. Yet a significant portion of mathematical effort remains focused on classical, internally defined questions rather than on these rapidly growing external demands. At first gla...

Beyond “Failure Tolerance”

In recent years, many discussions of innovation policy have emphasized the need to “tolerate failure.” While I strongly agree with this principle, I worry that the slogan risks diverting attention from a more fundamental issue: the structure of research evaluation and the design of public R&D investment. Encouraging risk-taking alone does not explain why the descendants of successful entrepreneurs in countries such as Japan and Korea often become effective long-term managers and R&D investors, nor why governments that rely heavily on expert committees frequently struggle to achieve comparable innovation outcomes. The repeated call to “accept failure” can therefore oversimplify the problem. Many researchers are not avoiding risk because they fear failure. Rather, the structure of academic incentives often encourages work that is theoretically elegant and readily publishable rather than work that addresses long-term, system-oriented technological challenges. As a result, res...

Considerations for Ensuring the Economic Feasibility of Medical AI Research

Before starting this blog, I should note that I am a retired professor and therefore inevitably carry certain biases, as is often the case for academics with limited direct experience in industry. When people in academia talk about the development of medical AI, the discussion often drifts toward higher resolution, more accurate diagnosis, and fully automatic or end-to-end autonomous models. This tendency is understandable. Academic incentives reward measurable performance improvements, benchmark dominance, and methodological elegance. However, this perspective quietly overlooks the force that ultimately determines whether a technology survives outside the laboratory: economics. Healthcare systems do not evolve in ideal conditions. They evolve under demographic pressure, workforce shortages, rising capital and maintenance costs, and reimbursement systems that lag far behind technological ambition. Aging populations increase demand precisely when the number of available specialists decl...

The Cost of Protection with Slowed Circulation: Long-Term Vitality Traded for Short-Term Stability

A common pattern is emerging across multiple institutional sectors, including universities and research institutions. Policymakers and administrators are increasingly debating how to retain the valuable skills of senior talent approaching retirement. In the short term, such protective measures are effective: they enhance stability, preserve accumulated experience, and delay the loss of expertise. Over time, however, less visible costs accumulate. Talent turnover declines, entry pathways for younger scholars narrow, innovation slows, and institutions gradually trade long-term vitality for short-term stability. The current debate surrounding the role of distinguished professors over the age of 65 exemplifies this broader structural problem. It is often framed as an ethical dispute or an issue of age discrimination. In reality, it is neither. At its core, this is a question of system design—how a national research ecosystem balances protection with circulation. One point must be stated cl...

Comparison of Contemporary Large Language Models

This blog presents a concise structural comparison of five prominent large language models: GPT, Claude, Gemini, LLaMA, and Grok (xAI). Although all are built on Transformer-based foundations, they differ markedly in mathematical design, alignment strategy, training dynamics, and multimodal architecture. GPT (OpenAI) follows a scaling-law paradigm using a Transformer backbone enhanced by sparse Mixture-of-Experts layers. Claude (Anthropic) preserves the same basic architecture but introduces Constitutional AI, an alignment method that incorporates explicit behavioral constraints. Gemini (Google) adopts a unified multimodal Transformer that represents text, images, audio, and video within a single token sequence. LLaMA (Meta AI) emphasizes dense (non-MoE) Transformer scaling and data efficiency, prioritizing compute-optimal training and architectural simplicity. xAI's Grok retains the Transformer form but is trained on a non-stationar...
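The sparse Mixture-of-Experts routing mentioned above can be illustrated with a deliberately minimal toy sketch. This is not any vendor's actual implementation — real MoE layers operate on batched token tensors with learned routers, load-balancing losses, and capacity limits — but it shows the core idea: a router scores all experts, only the top-k experts actually run, and their outputs are mixed by softmax gates. All names here (`sparse_moe_layer`, the random matrices) are invented for illustration.

```python
import numpy as np

def sparse_moe_layer(x, expert_weights, gate_weights, top_k=2):
    """Route one token to its top-k experts and mix their outputs.

    x:              (d,) token representation
    expert_weights: list of (d, d) matrices, one toy "expert" each
    gate_weights:   (n_experts, d) router matrix
    """
    logits = gate_weights @ x                 # router score for every expert
    top = np.argsort(logits)[-top_k:]         # indices of the top-k experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                      # softmax over the selected experts only
    # Only the selected experts are evaluated: this is where the
    # compute savings of *sparse* MoE come from.
    return sum(g * (expert_weights[e] @ x) for g, e in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
router = rng.standard_normal((n_experts, d))
token = rng.standard_normal(d)

y = sparse_moe_layer(token, experts, router, top_k=2)
print(y.shape)  # (8,)
```

The design point is that parameter count (all experts) and per-token compute (only top-k experts) are decoupled, which is what lets MoE models scale parameters without a proportional increase in inference cost.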

Effective PDE Coefficients for Electrical Tissue Property Imaging

In this blog, I discuss how the effective (or homogenized) coefficient of the elliptic partial differential equation \(\sum_{i,j=1}^{3} \partial_i \big( a_{ij}\,\partial_j u \big) = 0\) in a body arises in the context of electrical tissue property imaging, where \(u\) denotes the electrical potential. In brief, bioimpedance is directly linked to this coefficient, and several companies, such as InBody and Sciospec, are actively developing bioimpedance-based devices. This blog is based on the book *Electromagnetic Tissue Properties MRI* (Imperial College Press) by Jin Keun Seo, Eung Je Woo, Ulrich Katscher, and Yi Wang. The mathematical model for electrical tissue property imaging is derived from an appropriate reduction of Maxwell’s equations. In the time-harmonic regime, the electric field \( \mathbf{E} \), current density \( \mathbf{J} \), magnetic field \( \mathbf{H} \), and magnetic flux density \( \mathbf{B} \) satisfy the following relations: ...
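The excerpt is cut off at this point, so the exact relations the post goes on to state are not shown here. For orientation, the standard time-harmonic Maxwell relations at angular frequency \(\omega\) — the usual starting point for this reduction — read:

\begin{align}
\nabla \times \mathbf{E} &= -\,i\omega \mathbf{B}, & \mathbf{B} &= \mu \mathbf{H},\\
\nabla \times \mathbf{H} &= \mathbf{J}, & \mathbf{J} &= (\sigma + i\omega\epsilon)\,\mathbf{E},
\end{align}

where \(\sigma\) is the conductivity, \(\epsilon\) the permittivity, and \(\mu\) the permeability. In the quasi-static regime one writes \(\mathbf{E} = -\nabla u\), and taking the divergence of \(\mathbf{J}\) (which is divergence-free, since \(\nabla \cdot (\nabla \times \mathbf{H}) = 0\)) yields the elliptic equation \(\nabla \cdot \big( (\sigma + i\omega\epsilon)\,\nabla u \big) = 0\), whose coefficient is precisely the admittivity whose effective (homogenized) version the post discusses.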