
Researchers in logic and computer science have revived an approach from metamathematics known as reverse mathematics to probe why certain problems remain stubbornly hard, Quanta Magazine reports. Instead of starting from fixed axioms and deriving theorems, reverse mathematics flips the process: it replaces one of the axioms with a candidate theorem, then checks whether that “theorem-axiom” proves the replaced axiom in return.
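The article does not spell out the formalism, but the standard schema from the reverse-mathematics literature makes the reversal concrete. In the sketch below, the base theory and the equivalence cited are the textbook choices (RCA₀ as base, Bolzano–Weierstrass equivalent to arithmetical comprehension), not anything specific to the new research:

```latex
% Forward direction: base theory B plus axiom A proves theorem T.
% Reversal: B plus T, now taken as an axiom, proves A back.
\[
  B + A \vdash T
  \qquad\text{and}\qquad
  B + T \vdash A,
  \quad\text{i.e.}\quad
  B \vdash (A \leftrightarrow T).
\]
% Classic textbook instance, over the usual weak base theory RCA_0:
% the Bolzano--Weierstrass theorem is equivalent to the arithmetical
% comprehension axiom that defines the system ACA_0.
\[
  \mathsf{RCA}_0 \vdash
  \bigl(\mathsf{ACA}_0 \leftrightarrow \text{Bolzano--Weierstrass}\bigr)
\]
```

When both directions hold, theorem and axiom carry exactly the same logical strength over the base theory, which is the sense of “equivalence” used throughout this story.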
This reversal matters because it reveals that theorems that looked unrelated, often drawn from different corners of computational complexity, are in fact logically equivalent relative to a fixed base of foundational assumptions. That equivalence suggests “hard problems” might share a common, deeper structural reason for their difficulty.
As an example, researchers exploring classical hardness, such as the famously intractable Travelling Salesperson Problem (TSP), use this method to show that proving TSP’s hardness is as hard as proving certain other complexity-theoretic statements. In other words, the struggle to prove lower bounds or hardness may not reflect genuine differences between the problems; we may be hitting the same limitation over and over, disguised under different formulations.
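As background on why TSP is called intractable in the first place (this is standard complexity folklore, not the researchers’ construction): the naive exact algorithm checks every tour, and the number of tours grows factorially with the number of cities. A minimal brute-force sketch in Python, on a made-up distance matrix:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Find the shortest round trip by checking every possible tour.

    dist is an n x n matrix of pairwise distances. Fixing city 0 as the
    start leaves (n-1)! tours to examine, which is why this approach
    collapses almost immediately as n grows.
    """
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):  # all orderings of cities 1..n-1
        tour = (0,) + perm + (0,)           # close the cycle back at city 0
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Hypothetical 4-city instance: only 3! = 6 tours to check here,
# but already 19! > 10^17 tours for a 20-city instance.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_tsp(dist))  # -> (18, (0, 1, 3, 2, 0))
```

No known algorithm solves the general problem in polynomial time, and proving that none exists is exactly the kind of lower-bound statement the reverse-mathematics approach targets.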
This insight could reframe decades of effort to show that some computational tasks cannot be solved efficiently. Instead of tackling each problem in isolation, mathematicians and computer scientists might identify core structural barriers, perhaps even proving that a broad class of problems is hard for the same underlying reason.
Reverse mathematics offers a lens on the limits of proof itself. It suggests that many of the world’s most persistent “why-can’t-we-solve-this” questions in computation may trace back to a shared logical core, not to independent quirks of each problem.