Research
The structured documentation and analysis provided by Algorithm Wiki have supported research in algorithmic complexity, optimization techniques, and computational theory. The platform's systematic approach to algorithm classification and relationship mapping contributes to the broader understanding of theoretical computer science.
Publications

How fast do algorithms improve?
Authors: Sherry, Yash, and Neil C. Thompson
Algorithms determine which calculations computers use to solve problems and are one of the central pillars of computer science. As algorithms improve, they enable scientists to tackle larger problems and explore new domains and new scientific techniques [1], [2]. Bold claims have been made about the pace of algorithmic progress. For example, the President’s Council of Advisors on Science and Technology (PCAST), a body of senior scientists that advise the U.S. President, wrote in 2010 that “in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed” [3]. However, this conclusion was based on data from progress in linear solvers [4], which is just a single example. With no guarantee that linear solvers are representative of algorithms in general, it is unclear how broadly conclusions such as PCAST’s should be interpreted.

Building the algorithm commons: Who discovered the algorithms that underpin computing in the modern enterprise?
Authors: Thompson, Neil C., Shuning Ge, and Yash M. Sherry
The reach of the modern enterprise relies on the power of information technology (IT) tools such as sensors, databases, and machine learning. But tool improvements must be fueled either by increased computing power (e.g., faster hardware) or by getting more productivity from existing systems (e.g., through better computer algorithms). New research has uncovered that this second source, algorithm progress, is more important than previously realized—sometimes orders of magnitude more important than hardware—and thus could be an important technological stepping-stone to give competitive advantage to a country's firms. Analyzing this “Algorithm Commons” reveals that the United States has been the largest contributor to algorithm progress, with universities and large private labs (e.g., IBM) leading the way, but that U.S. leadership has faded in recent decades.

A Metastudy of Algorithm Lower Bounds
Authors: Liu, Emily
Algorithms are essential to the field of computer science, and algorithm designers are always searching for mathematically optimal algorithms. Sherry and Thompson found that improvements to algorithm upper bounds have been steadily decreasing since the 1970s. In this work we aim to discover whether this could be because researchers have already found the optimal versions of many algorithms. To get a better sense of the picture, we compiled lower bounds for the algorithm families studied by Sherry and Thompson. We find that, while a few problems still have large gaps between upper and lower bounds where improvement is possible, over three-quarters of these problems are already very close to being optimal! The “slowing progress” may in fact prove to be a triumph in disguise, as it indicates that many problems have achieved optimal solutions.
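A classic instance of the nearly closed gaps this thesis measures is comparison sorting: any comparison sort needs at least ⌈log₂(n!)⌉ comparisons in the worst case, and mergesort comes within a whisker of that bound. The sketch below (illustrative only, not drawn from the thesis) counts mergesort's comparisons over all permutations of a small input and compares the worst case against the information-theoretic lower bound.

```python
import itertools
import math

def merge_sort_count(a):
    """Merge sort that returns (sorted list, number of comparisons made)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    merged, comps = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comps += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

n = 6
# Lower bound: any comparison sort needs ceil(log2(n!)) comparisons
# in the worst case, because it must distinguish all n! orderings.
lower = math.ceil(math.log2(math.factorial(n)))
# Upper bound: mergesort's actual worst case over every permutation.
worst = max(merge_sort_count(list(p))[1]
            for p in itertools.permutations(range(n)))
```

For n = 6 the lower bound is 10 comparisons while mergesort's worst case is 11, i.e., the upper and lower bounds are essentially touching, which is the "close to optimal" situation the abstract describes for most problem families.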

The Space Race: Progress in Algorithm Space Complexity
Authors: Rome, Hayden
This paper presents the first broad survey of the space complexities of algorithms for important problems in computer science, analyzing more than 800 algorithms for different problem families, and comparing the different algorithms for each of these problem families. The survey reveals the increasing importance of space complexity in recent years and discusses its relationship with time complexity. Our findings reveal an increasing trend in the percentage of algorithm papers that include space complexity analysis. We identify an increasing trend in the percentage of problem families with asymptotic time-space tradeoffs. Additionally, we find that the few problem families that see improvements in space complexity have typically improved at rates faster than the improvement rates of DRAM access speed and DRAM capacity. Under the right conditions, these algorithmic improvements to space complexity can be much more important than hardware improvements when considering computational speedups related to data accesses. This study sheds light on the space complexity of algorithms and contributes to a better understanding of the relationship between time and space complexities. We have also uploaded the space complexity work for this paper to our website, The Algorithm Wiki, to serve as a useful resource for theorists and practitioners alike.
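As a toy illustration of what a space-complexity improvement looks like (a textbook example, not one drawn from the survey's dataset): the same linear-time Fibonacci recurrence can be evaluated in Θ(n) space with a full dynamic-programming table, or in Θ(1) space by keeping only the last two values.

```python
def fib_table(n):
    """Θ(n) space: keep the entire DP table of intermediate values."""
    t = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        t[i] = t[i - 1] + t[i - 2]
    return t[n]

def fib_rolling(n):
    """Θ(1) space: same recurrence, but only the last two values survive."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both functions do Θ(n) additions; only the memory footprint differs. Space improvements of this flavor matter precisely when, as the abstract notes, data no longer fits in fast memory and every extra access is paid for at DRAM speed.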

Progress in Parallel Algorithms
Authors: Tontici, Damian
Parallel computing offers the promise of increased performance over sequential computing, and parallel algorithms are one of its key components. There has been no aggregated or generalized comparative analysis of parallel algorithms. In this thesis, we investigate this field as a whole. We aim to understand the trends in algorithmic progress, improvement patterns, and the importance and interactions of various commonly used metrics. We collect and analyze parallel algorithms solving the problems in our set. We look at four major themes: how parallel algorithms have progressed, including in relationship to sequential algorithms and parallel hardware; how the work and span of algorithms influence performance; how problem size and available parallelism affect performance; and what researchers’ observable priorities look like. We find that more problems have had parallel improvements than sequential ones since the ’80s, that most parallel algorithms don’t improve algorithmic complexities, and much more. This research helps us understand how the field of parallel algorithms has changed over time, and what it looks like now.

On Algorithmic Progress in Data Structures and Approximation Algorithms
Authors: Li, Jeffrey
In the big data regime, computer systems and algorithms must process large amounts of data, making many traditional exact algorithms too costly to run. To work around this, researchers have developed approximation algorithms, which trade off some accuracy for asymptotic improvements in runtime, and data structures, which can efficiently store and answer multiple queries about a dataset. This naturally leads to the question: how have approximation algorithms and data structures improved over the years? Here, we provide some insight into this question, looking into trends in algorithmic and data structure progress, tradeoffs between speed and accuracy or between runtimes of specific data structure operations, and specific problems of interest. Our analysis is based on a dataset of around 300 approximation algorithms and around 250 data structures. For both fields, we find that research is still fairly active even to the present day, even though significant or asymptotic gains for data structures have been slowly on the decline. Improvements have also been fairly heterogeneous – some problems see a lot of work and improvements put into them, while others have not seen as much progress. In addition, of the problems that have both exact and approximation algorithms, for around 1/6 the approximation algorithms have shown immensely large average yearly improvement rates compared to the exact algorithms, while for around 1/2 the approximation algorithms offer only minimal improvement over the exact algorithms. For data structures, we find that only 4 of the 28 abstract data types in our dataset have ever had a tradeoff between storage requirements and/or runtimes of specific operations, with only 2 persisting to the present, suggesting that improvements generally build on each other without increasing space usage or the time required for other operations.
This research helps us understand how approximation algorithms and data structures have progressed over the years and where they stand today.
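As a concrete illustration of the accuracy-for-speed tradeoff that defines approximation algorithms (a textbook example, not one of the paper's roughly 300 algorithms): minimum vertex cover is NP-hard to solve exactly, yet the maximal-matching heuristic runs in linear time and provably returns a cover at most twice the optimal size.

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover.
    Scan the edges; whenever an edge is not yet covered,
    add BOTH of its endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    # Every edge now has an endpoint in the cover. The chosen edges
    # form a matching, and any cover must pick one endpoint per matched
    # edge, so |cover| <= 2 * |optimal cover|.
    return cover
```

On the path 1–2–3–4 the heuristic may return {1, 2, 3, 4} while the optimum is {2, 3}: the answer is worse, but the guarantee bounds how much worse, and the runtime drops from exponential (for exact solving) to linear, which is exactly the kind of tradeoff the paper quantifies across problems.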

How close are algorithms to being optimal?
Authors: Liu, Emily, William Kuszmaul, Yash Sherry, Jayson Lynch, and Neil C. Thompson
Working Paper

How fast are algorithms reducing the demands on memory? A survey of progress in space complexity
Authors: Rome, Hayden, Jayson Lynch, Jeffrey Li, Chirag Falor, and Neil Thompson
Working Paper