Friday, February 4, 2011

Empirical Analysis, Analysis of Algorithms, and Big-Oh Notation

EMPIRICAL ANALYSIS

                When you look for the meaning of Empirical Analysis on the web, you will not find one specific definition. To work out what Empirical Analysis is really all about, let us first define the two words separately.

Empirical: according to Wikipedia, the word empirical refers to information gathered through experiment, observation, and even experience; information that is observable through our senses and supported by evidence. (http://en.wikipedia.org/wiki/Empirical)

Analysis: the process of breaking a complex topic or substance into smaller parts to gain a better understanding of it. It is most often applied in the fields of Mathematics and Logic. (http://en.wikipedia.org/wiki/Analysis)

                From the definitions above, we can say that Empirical Analysis is analysis based on observation, experiment, and experience: we dig into the information produced by experiments in order to obtain results.

                According to a slide from www.cameron.edu, the mathematical analysis of some algorithms, even well-known or very simple ones, is difficult. The alternative way of dealing with this is Empirical Analysis, which follows these steps (a small sketch of the process appears after the list):

1. Decide on the basic operations.
2. Decide on the input sample (range, size...).
3. Convert the algorithm to a program.
4. Generate a sample of inputs.
5. Run the program; record the observed data, and
6. Analyze the data.
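
As a rough Python sketch of these steps (the summing loop, the input sizes, and the use of random numbers are my own illustrative choices, not taken from the slide), we can pick a basic operation, generate samples, run the program, and record the observed times:

import random
import time

def sum_of_list(values):
    # The algorithm under study; the addition inside the loop is the basic operation.
    total = 0
    for v in values:
        total += v
    return total

# Steps 2 and 4: decide on input sizes and generate a sample for each size.
sizes = [1000, 10000, 100000, 1000000]
samples = [[random.random() for _ in range(n)] for n in sizes]

# Steps 5 and 6: run the program, record the observed data, and look at how it grows.
for n, sample in zip(sizes, samples):
    start = time.perf_counter()
    sum_of_list(sample)
    elapsed = time.perf_counter() - start
    print("n =", n, "time =", round(elapsed, 6), "seconds")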

-----------------------------------oOo------------------------------------


ANALYSIS OF ALGORITHMS

                To analyze an algorithm is to determine the amount of resources (i.e. time and storage) necessary to execute it. The efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity) required.
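
As a small sketch of what "a function relating the input length to the number of steps" means (the linear search and the choice of comparison as the basic operation are my own examples), we can count the operations directly:

def count_comparisons(values, target):
    # Count how many comparisons a simple linear search performs on this input.
    comparisons = 0
    for v in values:
        comparisons += 1
        if v == target:
            break
    return comparisons

# In the worst case (target not present) the count equals the input length n.
print(count_comparisons([3, 1, 4, 1, 5], 9))  # prints 5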

                One way of understanding this topic better is to know the reasons why we perform this kind of analysis. Here are some of them:


  • Comparison: determining which of several algorithms is the most appropriate or the most efficient to use for a given problem.
  • Running time: estimating how long an algorithm is likely to run for the inputs we care about.
  • Predicting performance in a new environment, and
  • Setting the values of algorithm parameters.


-----------------------------------oOo------------------------------------


BIG-OH NOTATION



Big-Oh Notation is a method of expressing the upper bound of an algorithm’s running time; it is a measure of the longest amount of time the algorithm could possibly take to complete. It is used to describe the performance or complexity of an algorithm, that is, the execution time required or the space used (for example, in memory or on disk).

Big-Oh Notation answers the question, “Why does size matter?” With a typical table of cases, we can see how many operations would be performed for various values of N.
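
As a rough reconstruction of such a table (the particular values of N are my own choice, and the operation counts are rounded), the growth-rate functions described below behave roughly like this:

                 N = 10       N = 100       N = 1,000
O(1)                  1             1               1
O(log N)              3             7              10
O(N)                 10           100           1,000
O(N log N)           33           664           9,966
O(N^2)              100        10,000       1,000,000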

It was mentioned above that the number of operations varies with N; let us now define each of the common growth-rate functions and see how they differ.

The time efficiency of almost all of the algorithms we have discussed can be characterized by only a few growth rate functions:

O(1) - constant time
         This means that the algorithm requires the same fixed number of steps regardless of the size of the task.
                  Example: Push and Pop operations for a stack.
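
A minimal sketch of the stack example (using a Python list as the stack is my own choice of representation): push and pop touch only the top element, so the cost stays the same no matter how many items the stack holds.

stack = []

stack.append(10)    # push: one step, regardless of the stack's size
stack.append(20)    # push
top = stack.pop()   # pop: also one step; returns 20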

O(N) - linear time
       This means that the algorithm requires a number of steps proportional to the size of the task.
                  Example: Calculating N-factorial iteratively; finding the Nth Fibonacci number iteratively.
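
A small sketch of the factorial example (the function name is my own): the single loop runs N times, so the number of multiplications grows in direct proportion to N.

def factorial(n):
    # Iterative N-factorial: the loop body executes about n times, hence O(N).
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120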

O(N^2) - quadratic time
       The number of operations is proportional to the size of the task squared.
                  Example: Comparing two two-dimensional arrays of size n by n.
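
A rough sketch of the array-comparison example (representing the arrays as nested Python lists is my own choice): the nested loops visit all n * n cells, so the number of comparisons grows with the square of n.

def arrays_equal(a, b, n):
    # Compare two n-by-n arrays cell by cell: up to n * n comparisons.
    for i in range(n):
        for j in range(n):
            if a[i][j] != b[i][j]:
                return False
    return True

a = [[1, 2], [3, 4]]
b = [[1, 2], [3, 4]]
print(arrays_equal(a, b, 2))  # True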

O(log N) - logarithmic time
       The program gets only slightly slower as N grows (typically by using some transformation that progressively cuts down the size of the problem).
                  Example: Binary search in a sorted list of n elements.
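
A minimal binary-search sketch (written iteratively; the variable names are my own): each comparison discards half of the remaining list, so about log2(N) comparisons are enough.

def binary_search(sorted_values, target):
    # Return the index of target in sorted_values, or -1 if it is not there.
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2      # each pass halves the range still being searched
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3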

O(N log N) - "N log N" time
       Typical of "divide and conquer" algorithms, where the problem is solved by breaking it up into smaller subproblems, solving them independently, and then combining the solutions.
                  Example: sorting algorithms such as Mergesort and Quicksort.
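
A short Mergesort sketch (a textbook top-down version, not taken from the post): the list is halved about log2(N) times, and each level of merging does roughly N work, which gives the N log N total.

def merge_sort(values):
    # Divide: split the list in half and sort each half recursively.
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Combine: merge the two sorted halves in about len(values) steps.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]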


-----------------------------------oOo------------------------------------


