2.4 Big-Oh Notation

Whichever line we look at in the experimental data, the access time never exceeds 100 µs for any of the memory accesses, even with the other things the computer might be doing. We are safe concluding that accessing memory takes less than 100 µs. In fact, 100 µs is much more time than is needed to access or store a value in a memory location. Our experimental data backs up the two claims we made earlier. However, technically, it does not prove our claim that accessing memory takes a constant amount of time. The architecture of the RAM in a computer could be examined to prove that accessing any memory location takes a constant amount of time. Accessing memory is just like calling out a name in a group of people and having that person respond with the value they were assigned. It doesn't matter which person's name is called out. The response time will be the same, or nearly the same. The actual time to access the RAM of a computer may vary a little bit if a cache is available, but at least we can say that there is an upper bound to how much time accessing a memory location will take.
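To make the claim concrete, here is a small timing sketch (not the book's original experiment behind the data above; the list sizes, indices, and repetition count are arbitrary choices) that measures the average time to fetch an element from lists of very different sizes. The averages should come out roughly the same regardless of list size or element position.

```python
# Illustrative sketch: average time to access one element of a Python list.
# List sizes, indices, and repetition count are arbitrary example values.
import timeit

for size in (1_000, 100_000, 10_000_000):
    lst = list(range(size))
    # Time accessing the first, middle, and last elements many times each.
    for idx in (0, size // 2, size - 1):
        total = timeit.timeit(lambda: lst[idx], number=1_000_000)
        avg_us = total / 1_000_000 * 1e6   # average per access, in microseconds
        print(f"size={size:>10}  index={idx:>10}  avg access = {avg_us:.4f} µs")
```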

This idea of an upper bound can be stated more formally. The formal statement of an upper bound is called Big-Oh notation. The Big-Oh refers to the Greek letter Omicron, which is typically used when talking about upper bounds. As computer programmers, our number one concern is how our programs will perform when we have large amounts of data. In terms of the memory of a computer, we wanted to know how our program would perform if we had a very large list of elements. We found that all elements of a list are accessed in the same amount of time, independent of how big the list is. Let's represent the size of the list by a variable called n. Let the average access time for accessing an element of a list of size n be given by f(n). Now we can state the following.

O(g(n)) = { f | ∃ d > 0, n0 ∈ ℤ⁺ ∋ 0 ≤ f(n) ≤ d·g(n), ∀ n ≥ n0 }

In English this reads as follows: the class of functions designated by O(g(n)) consists of all functions f where there exists a d greater than 0 and an n0 (a positive integer) such that 0 is less than or equal to f(n), and f(n) is less than or equal to d times g(n), for all n greater than or equal to n0.
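To see how the definition works on a concrete function, the sketch below checks the inequality numerically. The functions f and g and the constants d and n0 here are made-up illustrative values, not taken from the text: for f(n) = 3n + 10 and g(n) = n, choosing d = 4 and n0 = 10 satisfies the definition, so f(n) is O(n).

```python
# Numeric check of the Big-Oh definition with example values (not from the text):
# f(n) = 3n + 10 is O(n), witnessed by d = 4 and n0 = 10.
def f(n):
    return 3 * n + 10

def g(n):
    return n

d, n0 = 4, 10

# The inequality 0 <= f(n) <= d*g(n) holds for every n at and beyond n0.
assert all(0 <= f(n) <= d * g(n) for n in range(n0, 100_000))

# Below n0 the bound is allowed to fail, and here it does.
print([n for n in range(1, n0) if f(n) > d * g(n)])   # prints [1, 2, ..., 9]
```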

If f is an element of O(g(n)), we say that f(n) is O(g(n)). The function g is called an asymptotic upper bound for f in this case. You may not be comfortable with the mathematical description above. Stated in English, the set named O(g(n)) consists of all functions f(n) that have an upper bound of d*g(n) as n approaches infinity. This is the meaning of the word asymptotic. An asymptotic bound means that for some small values of n the value of f(n) might be bigger than the value of d*g(n), but once n gets big enough (i.e. bigger than n0), then for all bigger n it will always be true that f(n) is less than or equal to d*g(n). This idea of an asymptotic upper bound is pictured in Fig. 2.3. For some smaller values the function's performance, shown in green, may be worse than the blue upper bound line, but eventually the upper bound is bigger for all larger values of n.

We have seen that the average time to access an element in a list is constant and does not depend on the list size. In the example in Fig. 2.2, the list size is the n in the definition and the average time to access an element in a list of size n is the f(n).

Fig. 2.3 An Upper Bound

Because the time to access an element does not depend on n, we can pick g(n) = 1. So, we say that the average time to access an element in a list of size n is O(1). If we assume it never takes longer than 100 µs to access an element of a list in Python, then a good choice for d would be 100. According to the definition above, it must then be the case that f(n) is less than or equal to 100 once n gets big enough.

The choice of g(n) = 1 is arbitrary in computing the complexity of accessing an element of a list. We could have chosen g(n) = 2. If g(n) = 2 were chosen, d might be chosen to be 50 instead of 100. But, since we are only concerned with the overall growth in the function g, the choice of 1 or 2 is irrelevant and the simplest function is chosen, in this case giving O(1). In English, when an operation or program is O(1), we say it is a constant time operation or program. This means the time the operation takes does not depend on the size of the data, n.
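One way to connect the definition back to the example is to compare measured access times against the bound d*g(n) = 100 µs. The sketch below is only illustrative, using arbitrary list sizes; the measured averages stand in for f(n).

```python
# Illustrative check that measured list access times stay under d*g(n),
# with g(n) = 1 and d = 100 µs as chosen in the text. List sizes and the
# repetition count are arbitrary example values.
import timeit

d = 100            # constant from the text, in microseconds

def g(n):
    return 1       # constant upper-bound function

for n in (1_000, 1_000_000, 10_000_000):
    lst = list(range(n))
    reps = 100_000
    total = timeit.timeit(lambda: lst[n // 2], number=reps)
    f_n = total / reps * 1e6   # measured average access time f(n), in µs
    print(f"n={n:>10}  f(n) = {f_n:.4f} µs  d*g(n) = {d * g(n)} µs  "
          f"within bound: {f_n <= d * g(n)}")
```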

It turns out that most operations that a computer can perform are O(1). For instance, adding two numbers together is an O(1) operation. So is multiplication of two numbers. While both operations require several cycles in a computer, the total number of cycles does not depend on the size of the integers or floating point numbers being added or multiplied. A cycle is simply a unit of time in a computer. Comparing two values is also a constant time operation. When computing complexity, any arithmetic calculation or comparison can be considered a constant time operation.
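As a rough illustration, the sketch below times addition, multiplication, and comparison on small and very large fixed-size floating point operands (arbitrary values chosen here); the per-operation times should come out essentially the same in both cases.

```python
# Illustrative sketch: arithmetic and comparison take about the same time
# regardless of operand magnitude. The operand values are arbitrary choices.
import timeit

pairs = [(2.0, 3.0), (2.0e150, 3.0e150)]   # small and very large floats

for a, b in pairs:
    add = timeit.timeit(lambda: a + b, number=1_000_000)
    mul = timeit.timeit(lambda: a * b, number=1_000_000)
    cmp_ = timeit.timeit(lambda: a < b, number=1_000_000)
    print(f"a={a:g}, b={b:g}:  add={add:.3f}s  mul={mul:.3f}s  compare={cmp_:.3f}s")
```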

This idea of computational complexity is especially important when the complexity of a piece of code depends on n. In the next section we'll see some code whose running time depends on the size of the list it is working with, and why it is important that we understand the implications of how we write even a small piece of code.

 