Pioneers Linking Math and Computer Science Win the Abel Prize
When Avi Wigderson and László Lovász began their careers in the 1970s, theoretical computer science and pure mathematics were almost entirely separate disciplines. Today, they’ve grown so close it’s hard to find the line between them. For their many fundamental contributions to both fields, and for their work drawing them together, today Lovász and Wigderson were awarded the Abel Prize, which is given out by the Norwegian Academy of Science and Letters and regarded as one of the highest honors in mathematics.
“In many ways their work is complementary. Avi is on the computer science side and Lovász is on the mathematics side, but a lot of the issues they work on are related,” said Russell Impagliazzo, a computer scientist at the University of California, San Diego who has collaborated with both researchers.
That this pairing could occur at all is a result of the unique period in scientific history in which both men came of age.
Wigderson was born in Haifa, Israel, in 1956. By the time he was a teenager, computer scientists were just beginning to sketch a basic theoretical framework that would end up absorbing much of his professional life. That framework, called complexity theory, involves classifying computational problems based on how hard they are for algorithms to solve. The primary measure of difficulty is the number of computational steps, with the most basic distinction being “easy” versus “hard.”
An example of an easy computational problem is multiplying two numbers together. No matter how large the numbers become, computers can find their product quickly. This problem is in the complexity class called “P,” which contains all easy-to-solve computational problems.
By contrast, finding the prime factors of a number is an example of a computational problem that seems to be hard. There’s no known algorithm that can do it quickly for all numbers. But if someone hands you the prime factors of a number, it’s easy to verify that they are indeed the correct factors by multiplying them together. This problem is in the complexity class called “NP,” which contains computational problems that might be hard to solve but whose answers are easy to verify.
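To make the asymmetry concrete, here is a minimal Python sketch (the numbers and function names are illustrative choices, not something from the prize citation): verifying a claimed factorization takes a single multiplication, while recovering the factors by brute-force trial division takes a number of steps that balloons as the number grows.

```python
def verify_factors(n, factors):
    """Verification is easy: multiply the claimed prime factors and compare."""
    product = 1
    for p in factors:
        product *= p
    return product == n

def factor_by_trial_division(n):
    """Finding the factors appears to be hard: this brute-force search needs
    a number of steps that grows exponentially with the number of digits."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Two known primes multiplied together: verifying the factorization is instant...
print(verify_factors(15_485_863 * 32_452_843, [15_485_863, 32_452_843]))  # True
# ...while factor_by_trial_division(15_485_863 * 32_452_843) would grind through
# millions of candidate divisors, and the numbers used in real cryptography
# are vastly larger still.
```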
In the early 1970s computer scientists formulated the guiding conjecture in the field of computational complexity, asking whether the classes P and NP are in fact identical, that is, whether every problem whose answer is easy to verify is also easy to solve.
This question was still fresh in 1977 when Wigderson entered the Technion, the Israel Institute of Technology. In the coming decades he would make many foundational contributions to complexity theory — helping to elaborate which problems fall into which complexity classes under which circumstances.
“When I started graduate school, computational complexity was developing into a mature field. I myself developed with it,” said Wigderson.
In the late 1980s, Wigderson and his collaborator, Ran Raz, studied the computational complexity of the “perfect matching” problem (a question that features in Lovász’s work as well). Suppose you’ve got 20 machines, each of which is capable of performing some, but not all, of 20 different tasks. The perfect matching problem asks if you can assign machines to tasks so that all tasks are covered and each machine performs exactly one.
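When no restrictions are placed on the operations a computer may use, a perfect matching can be found efficiently with the classic augmenting-path method. The sketch below is an illustrative Python version of that standard technique, with made-up machines and tasks; it is not drawn from Wigderson and Raz’s work on the restricted setting described next.

```python
def find_perfect_matching(can_do, num_tasks):
    """can_do[m] lists the tasks machine m is able to perform.
    Returns a dict mapping task -> machine if every task can be covered,
    or None if no perfect matching exists."""
    assigned = {}  # task -> machine currently responsible for it

    def try_assign(machine, seen):
        # Classic augmenting-path step: give `machine` a task, displacing and
        # reassigning other machines if that frees up a task for it.
        for task in can_do[machine]:
            if task in seen:
                continue
            seen.add(task)
            if task not in assigned or try_assign(assigned[task], seen):
                assigned[task] = machine
                return True
        return False

    for machine in range(len(can_do)):
        if not try_assign(machine, set()):
            return None  # this machine cannot be matched
    return assigned if len(assigned) == num_tasks else None

# Toy instance with 3 machines and 3 tasks (labels are illustrative).
abilities = [[0, 1], [0], [1, 2]]
print(find_perfect_matching(abilities, 3))  # e.g. {0: 1, 1: 0, 2: 2}
```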
Wigderson and Raz studied the problem, with certain restrictions added: They imagined that the computer circuit working on the problem is capable of most standard computational operations (like “and” and “or”), but that it can’t perform a crucial one: the “not” operation.
Of course, computer scientists would most love to prove, without qualification, that a computational question is hard. But so far they have failed to do so (otherwise we’d know whether P equals NP). So instead they try to prove that there’s no fast algorithm for a problem like matching once you restrict the computational resources available to solve it.
“You want to find the limitations of algorithms, and if you can’t do it in the most general setting, you restrict them, you tie one arm behind their backs,” said Wigderson. In 1990 he and Raz proved that there is no good way of using many computers in parallel to solve the matching problem in circuits without the “not” operation.
Around the same time, Wigderson worked on a central question in complexity, about how randomness changes the speed at which you can solve computational problems. Since the 1970s, computer scientists had suspected that randomness conveys an advantage. They’d found that if you allow an algorithm to essentially flip coins during its decision process, it can reach a solution much faster for some problems — like testing whether a number is prime — than if you require it to deterministically choose each step.
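Primality testing is the textbook example. The sketch below is a standard Miller-Rabin test in Python, offered as an illustration of a coin-flipping algorithm rather than anything specific to Wigderson’s papers: each round picks a random "witness," and any composite number survives a single round with probability at most one in four.

```python
import random

def probably_prime(n, rounds=20):
    """Miller-Rabin: a fast randomized primality test.  A composite n fools
    any one round with probability at most 1/4, so 20 independent rounds
    leave only a vanishingly small chance of error."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # the algorithm's "coin flips"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a exposed n as composite
    return True                          # almost certainly prime

print(probably_prime(2**89 - 1))  # True: a Mersenne prime
print(probably_prime(2**89 + 1))  # False: composite (divisible by 3)
```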
“It was the age when it seemed that you can do much more with randomness than you can do without it,” said Wigderson.
But in two papers published in the 1990s, Wigderson and his collaborators proved that under certain widely believed assumptions, it’s always possible to convert a fast random algorithm into a fast deterministic algorithm. The result established that, granting those assumptions, the complexity class known as “BPP” is exactly equal to the complexity class P. It tied decades of work on randomized algorithms neatly into the main body of complexity theory and changed the way computer scientists looked at random algorithms.
“I think that today almost anybody you asked would tell you randomness is weak rather than that randomness is powerful, because, under assumptions we strongly believe, randomness can be eliminated,” said Wigderson.
Wigderson, who has been at the Institute for Advanced Study since 1999, has generated a long list of other results in complexity theory, including a technique called the zig-zag product that connects directly to several areas of pure mathematics and provides a strategy for escaping from a maze while only keeping track of a fixed number of intersections. The breadth of Wigderson’s work reflects the way the field of computational complexity has expanded since he entered it.
“Avi is a very central figure in the field,” said Impagliazzo. “He’s one of the people who holds it together as a consistent field and has a clear view as we’ve expanded.”
At the same time Wigderson was expanding the frontiers of complexity theory, Lovász was at work in a closely related field that also had a lot of room to grow.
Born in Budapest in 1948, Lovász was a mathematics star from an early age. As a teenager he earned three gold medals at the International Math Olympiad and also triumphed in a Hungarian game show that put math prodigies inside glass isolation booths and challenged them to solve problems.
Also early in his life, Lovász met the influential Hungarian mathematician Paul Erdős, who helped introduce him to the field of graph theory. At the time, graph theory was a mathematical backwater, known for posing fun problems like the four-color conjecture (now a proved theorem), which asks whether it’s always possible to color the countries of any map with only four colors so that no two adjacent countries share a color.
“I wouldn’t call it obscure, but certainly graph theory was not mainstream mathematics because many of the problems or results arose from puzzles or sort of recreational mathematics,” said Lovász, who is now at Eötvös Loránd University in Hungary. But things were already changing by the time Lovász earned his doctorate in 1970 at the age of 22 — and a main reason was the birth and rapid growth of computer science.
Computers, by necessity, work with discrete quantities — binary strings of 1s and 0s. Combinatorics is the mathematics of discrete objects, and one of its main subfields is graph theory, which studies networks of edges (lines) connecting vertices (dots). As such, it provided a kind of language for investigating emerging questions in theoretical computer science.
Lovász views the rise of computers and graph theory as a propitious historical concurrence, on par with the way analysis (an advanced form of calculus) came of age more than a century earlier through the investigation of urgent physical questions.
“I sometimes use the analogy to analysis and physics in the 18th and 19th centuries, where they sort of grew hand by hand,” said Lovász. “There’s something similar that happened in graph theory and computer science.”
Much of Lovász’s work has centered on the development of algorithms for solving different kinds of problems. One of his most influential results was the LLL algorithm, named for its creators, Lovász and the brothers Arjen and Hendrik Lenstra. The algorithm works on geometric objects called lattices, the grids of points you get by taking every whole-number combination of a set of basis vectors. The LLL algorithm addresses a basic question about their properties: Which point in the lattice, other than the origin itself, lies closest to the origin? It’s a simple question that’s often hard to solve, especially in higher-dimensional spaces and when the basis vectors span the lattice in a badly skewed way.
Instead of answering that question exactly, the LLL algorithm finds a good approximation: it identifies a short nonzero lattice point along with a guarantee that no other such point is much closer to the origin. Because so many problems can be recast in this geometric language, the ability to find that point has implications in a wide range of settings, from factoring polynomials to testing the security of cryptographic systems.
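For readers who want to see the machinery, here is a minimal, unoptimized sketch of textbook LLL reduction in Python using exact rational arithmetic; the variable names, the parameter delta and the toy basis are illustrative choices, not taken from the original paper. The first vector of the reduced basis is the short lattice point the algorithm promises.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthogonalize the basis (without normalizing) and record the
    projection coefficients mu[i][j]."""
    n = len(basis)
    ortho = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [a - mu[i][j] * b for a, b in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL: returns a reduced basis whose first vector is provably
    within an exponential factor of the shortest nonzero lattice vector
    (and is usually very short in practice)."""
    basis = [[Fraction(x) for x in v] for v in basis]
    n = len(basis)
    ortho, mu = gram_schmidt(basis)
    k = 1
    while k < n:
        # Size reduction: subtract integer multiples of earlier vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
                ortho, mu = gram_schmidt(basis)
        # Lovász condition: decide whether to advance or swap and backtrack.
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            ortho, mu = gram_schmidt(basis)
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in basis]

# Classic toy basis: the reduced basis starts with a very short vector.
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # e.g. [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```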
“It’s one of the fundamental algorithms. It’s important both theoretically and it’s used for many practical purposes,” said Gil Kalai of IDC Herzliya and the Hebrew University of Jerusalem, who was previously a member of the Abel Prize committee.
Another of Lovász’s most significant contributions has a probabilistic flavor to it. In the 1960s Paul Erdős developed what came to be called the probabilistic method for answering questions about graphs. Often mathematicians want to know whether a graph with certain properties exists. One way to answer such a question is to actually exhibit a graph that has them. But Erdős realized that another approach is merely to prove that a graph chosen at random has the property with positive probability, which means such a graph must exist even if no one can construct it.
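As a tiny numerical illustration of that style of argument (the specific numbers are ours, not from the article): if the expected number of unwanted substructures in a random coloring is less than one, then some coloring must avoid them entirely, even though nobody may know how to build it.

```python
from math import comb

def expected_mono_cliques(n, k):
    """Color each edge of the complete graph on n vertices red or blue with a
    fair coin flip; return the expected number of k-vertex cliques whose
    edges all come out the same color."""
    return comb(n, k) * 2 ** (1 - comb(k, 2))

# Erdős-style argument: on 100 vertices the expected number of monochromatic
# 10-cliques is just below 1, so some two-coloring of all 4,950 edges has
# none at all, even though no explicit construction is known.
print(expected_mono_cliques(100, 10))  # ~0.98
```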
Unfortunately, Erdős’ probabilistic method worked best at establishing the existence of graphs that are already common. In the 1970s, Lovász collaborated with Erdős to devise a complementary technique, called the Lovász local lemma, that works for proving the existence of graphs that are very rare. It’s since become one of the staple techniques in the field.
“This is a tool that was used hundreds, thousands of times in subsequent proofs,” said Kalai.
Lovász has solved many other problems in graph theory over the course of his career, including Kneser’s conjecture, about the minimum number of colors needed for coloring a certain graph, and a question about the conditions that guarantee perfect matchings and related structures in graphs. He also generated several conjectures of his own that are still guiding the field of graph theory today. These include two problems, the KLS conjecture and the EFL conjecture, that saw big results only within the last few months.
During the same years that pioneers like Wigderson were refining our understanding of computational complexity, Lovász was answering questions about graphs that helped define the borders between one complexity class and another.
“These complexity concepts were manifested by very simple questions about graphs,” said Kalai. “So questions about graphs became very prominent, and Lovász studied these from the beginning.”
And so it’s fitting that two pioneers who brought their respective fields together are now joined in another way, by the Abel Prize.