In the week before the midterm,
we will introduce some concepts in algorithm analysis, the field of computer science concerned with determining the efficiency of algorithms. Efficiency is not measured by the raw time an algorithm takes to accomplish its task: that time depends on factors, such as the quality of the computer hardware, that are independent of the algorithm itself. Instead, efficiency is described by an algorithm’s memory usage and its time complexity. For simplicity, we will only look at the latter.
An algorithm’s time complexity is a function relating the size of its input to the number of steps the algorithm takes to accomplish its task. We use a notation known as Big-O to classify algorithms by their time complexity. For example, an algorithm whose number of steps grows linearly with the size of its input has a time complexity of O(n), while an algorithm whose number of steps grows quadratically has a time complexity of O(n ** 2).
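As a rough illustration, consider the following Python sketches (the function names and inputs here are made up for this example, not taken from any library). The first makes a single pass over its input; the second compares every pair of items.

def contains(items, target):
    # One pass over the input: about n steps for n items, so O(n).
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    # Compares every pair of items: about n * n steps, so O(n ** 2).
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False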
We can list the most well-known time complexity classes in order of
decreasing efficiency:
O(1), O(log n), O(n), O(n log n), O(n ** 2), O(2 ** n), O(n!)
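To make one of the less obvious classes concrete, here is a sketch of binary search, a standard O(log n) algorithm (the input is assumed to be a sorted list):

def binary_search(items, target):
    # items must be sorted. Each iteration halves the remaining
    # search range, so the number of steps grows logarithmically
    # with the input size: O(log n).
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1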
Although some algorithms take the same number of steps on every input of a given size, most take more steps on some inputs and fewer on others. For inputs of a given size, an algorithm’s time complexity on its most favorable input is its best-case complexity, its complexity averaged over all inputs is its average-case complexity, and its complexity on its least favorable input is its worst-case complexity. With these tools, we can perform basic analysis of the time complexities of algorithms.
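Linear search is the textbook example of this distinction; the sketch below is annotated with its standard case analysis.

def linear_search(items, target):
    # Scan the input from the front, returning the first match.
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1

# Best case:    target is the first item -> 1 step, O(1).
# Worst case:   target is absent -> n steps, O(n).
# Average case: target is equally likely to be anywhere -> about
#               n / 2 steps, which is still O(n).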