+ "text": "Measuring Efficiency\nOnce we have established that an algorithm is correct, we must ask how much it “costs” to run. In this book, we care about two primary resources: Time (the number of operations performed) and Space (the amount of memory required).\nHowever, hardware changes so rapidly that it is rarely useful to talk about performance in terms of seconds or megabytes. To remain hardware-agnostic, we use an idealized computational model.\nIn this book, and in most algorithmic analysis, we utilize the Random Access Machine (RAM) model. This model provides a controlled environment where we can precisely describe the number of steps an algorithm takes by assuming a unitary unit of cost for basic operations.\nIn the RAM model, we assume:\n\nUnitary Operation Cost: Basic operations—such as arithmetic, variable assignment, and method calls—all cost exactly one unit of time.\nDiscrete Memory Cells: Memory is divided into discrete cells, each capable of holding one unit of data (such as a number, a character, or sometimes a small string).\nConstant Access Time: We can access any memory cell directly with a unitary cost. This is the “Random Access” from which the model takes its name; we can jump to any random location in memory without paying a penalty for distance.\n\nOf course, this is an abstraction. In a real computer, multiplication is more expensive than addition, floating-point numbers carry additional costs, and memory is structured into complex layers of cache. However, the RAM model works exceptionally well for comparing algorithms in the abstract because it glosses over details that are often unimportant in the grand scheme of complexity. It only begins to break down in specialized areas, such as numerical algorithms, where the exact cost of multiplications versus additions or the precise layout of numbers in memory becomes critical to performance.\nFurthermore, we rarely care about the absolute number of steps. Knowing that a specific sort takes exactly 1,024 operations is less useful than knowing how that cost grows as the input size \\(n\\) increases.\nThe core of algorithmic analysis is scaling. For example:\n\nConstant Cost (\\(O(1)\\)): If you have a list of items and you want to access the 42nd element, that operation has a unitary cost of 1. It does not matter if the list has 1,000 items, 1 million items, or 1 billion items; the effort required to jump to that specific index remains the same.\nLinear Cost (\\(O(n)\\)): If you want to count every item in that list, you must visit each one. If the size of the input doubles, the cost of the operation doubles. If you have 1,000 items, the cost is 1,000; if you have 1 million, the cost is 1 million.\n\nTo formalize these scaling patterns, we use asymptotic notation, a terminology borrowed from mathematical analysis. This allows us to categorize algorithms into growth classes:\n\n\\(O(1)\\) - Constant Time: The cost is independent of the input size.\n\\(O(n)\\) - Linear Time: The cost grows in direct proportion to the input size.\n\\(O(n^2)\\) - Quadratic Time: The cost grows with the square of the input size, often seen in algorithms with nested loops.\n\nBy focusing on these growth rates, we can determine the “efficiency ceiling” of our solutions and decide whether we have found the optimal approach for a given problem.",