Why Adding a Full Hard Drive Can Make a Computer More Powerful

by Alan North


Those are pretty stringent constraints, so it wasn’t obvious that the extra memory could ever prove useful. But to their surprise, Buhrman and Cleve showed that if you tweak bits in just the right way, you really can get extra computational oomph out of a full memory.
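To get a feel for the trick, consider the toy sketch below. It is a simplification for illustration, not Buhrman and Cleve’s actual construction: it borrows a single register that already holds an arbitrary, unknown value, uses it to help compute a product, and hands it back exactly as it was found.

```python
def multiply_with_borrowed_register(x, y, catalyst):
    """Compute x * y with the help of a "full" register.

    `catalyst` holds an arbitrary value that must not be erased.
    Every update below is reversible, so the unknown value's
    contribution cancels algebraically and the register is restored.
    Illustrative sketch only; real catalytic algorithms compose many
    such reversible updates across an entire full memory.
    """
    acc = 0                  # the small amount of clean working memory
    acc += catalyst * y      # entangle the accumulator with the unknown value
    catalyst += x            # temporarily perturb the full register
    acc -= catalyst * y      # the unknown value cancels; acc now holds -x*y
    catalyst -= x            # undo the perturbation, restoring the register
    return -acc, catalyst

product, restored = multiply_with_borrowed_register(6, 7, catalyst=123456)
assert product == 42 and restored == 123456  # catalyst handed back unchanged
```

The accumulator never needs to know what the borrowed register held; that unknown contribution cancels out algebraically, which is why the register can always be returned in its original state.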

“That was a shocker for everyone,” said Loff, who was a graduate student in Buhrman’s group at the time, working on the memory question with his fellow student Florian Speelman. The team soon extended the result to an even larger class of problems, and published their combined results in 2014.

They named the new framework catalytic computing, borrowing a term from chemistry. “Without the catalyst, the reaction would not have proceeded,” said Raghunath Tewari, a complexity theorist at the Indian Institute of Technology, Kanpur. “But the catalyst itself remains unchanged.”

Not Far From the Tree

A small band of researchers continued to develop catalytic computing, but no one even tried to apply it to the tree evaluation problem that had initially inspired Koucký’s quest. For that problem, the remaining open question was whether a small amount of memory could be used for storage and computation simultaneously. But the techniques of catalytic computing relied on the extra, full memory being very large. Shrink that memory and the techniques no longer work.
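For reference, the tree evaluation problem hands you a complete binary tree whose leaves hold values from a small set and whose internal nodes hold two-argument functions, and asks for the value computed at the root. The minimal sketch below (with a hypothetical toy instance) shows the naive strategy: the recursion stack stores one partial result per level of the tree, and the open question was whether that storage could simultaneously serve as working space.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    # A leaf carries a value in {0, ..., k-1}; an internal node
    # carries a function combining the values of its two children.
    value: Optional[int] = None
    function: Optional[Callable[[int, int], int]] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def evaluate(node: Node) -> int:
    """Naive recursive tree evaluation. The recursion stack keeps one
    partial result per tree level, memory used purely for storage."""
    if node.left is None:             # leaf: nothing to compute
        return node.value
    left = evaluate(node.left)        # partial result parked on the stack
    right = evaluate(node.right)      # while the sibling subtree is evaluated
    return node.function(left, right)

# Hypothetical toy instance: k = 4, the root adds its children mod 4.
leaf = lambda v: Node(value=v)
root = Node(function=lambda a, b: (a + b) % 4, left=leaf(2), right=leaf(3))
assert evaluate(root) == 1
```

For a tree of height h with k possible values, this approach uses memory roughly proportional to h times log k, which is the cost that the conjectured lower bound, discussed below, held to be unavoidable.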

Still, one young researcher couldn’t help wondering whether there was a way to adapt those techniques to reuse memory in a tree evaluation algorithm. His name was James Cook, and for him the tree evaluation problem was personal: Stephen Cook, the legendary complexity theorist who invented it, is his father. James had even worked on it in graduate school, though he mostly focused on completely unrelated subjects. By the time he encountered the original catalytic computing paper in 2014, James was about to graduate and leave academia for software engineering. But even as he settled into his new job, he kept thinking about catalytic computing.

“I had to understand it and see what could be done,” he said.

For years, James Cook tinkered with a catalytic approach to the tree evaluation problem in his spare time. He gave a talk about his progress at a 2019 symposium in honor of his father’s groundbreaking work in complexity theory. After the talk, he was approached by a graduate student named Ian Mertz, who’d fallen in love with catalytic computing five years earlier after learning about it as an impressionable young undergrad.

“It was like a baby bird imprinting scenario,” Mertz said.


James Cook and Ian Mertz adapted catalytic computing techniques to design a low-memory algorithm for the tree evaluation problem.

Photograph: Colin Morris/Quanta Magazine


Photograph: Stefan Grosser/Quanta Magazine

Cook and Mertz joined forces, and their efforts soon paid off. In 2020, they devised an algorithm that solved the tree evaluation problem with less memory than the minimum the elder Cook and McKenzie had conjectured was necessary—though it was just barely below that threshold. Still, that was enough to collect on the $100 bet; conveniently for the Cooks, half of it stayed in the family.

But there was still work to do. Researchers had started studying tree evaluation because it seemed as if it might finally provide an example of a problem in P that’s not in L—in other words, a relatively easy problem that can’t be solved using very little memory. Cook and Mertz’s new method used less memory than any other tree evaluation algorithm, but it still used significantly more than any algorithm for a problem in L. Tree evaluation was down, but not out.

In 2023, Cook and Mertz came out with an improved algorithm that used much less memory—barely more than the maximum allowed for problems in L. Many researchers now suspect that tree evaluation is in L after all, and that a proof is only a matter of time. If that happens, the problem can no longer separate the two classes, and complexity theorists may need a different approach to the P versus L problem.

Meanwhile, Cook and Mertz’s results have galvanized interest in catalytic computing, with new works exploring connections to randomness and the effects of allowing a few mistakes in resetting the full memory to its original state.

“We’ve not finished exploring what we can do with these new techniques,” McKenzie said. “We can expect even more surprises.”


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


