Understanding the Basics of Algorithms

OVERVIEW

Computer algorithms have changed dramatically over the past century, and algorithms are a fundamental skill for any programmer who intends to develop a system that is scalable and performant.

Algorithms become really useful when dealing with large amounts of data or solving complex problems.

On top of that, algorithm questions are usually asked during job interviews, so it’s better to be safe than sorry.

Big-O Notation

Big-O notation is a mathematical model used to measure and classify algorithms. It estimates the time efficiency of running an algorithm as a function of the input size. With an understanding of each algorithm’s time efficiency, we are able to determine how well our apps will work in terms of speed and performance.
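A quick, informal way to see this growth in practice is to time the same operation on inputs of different sizes. The sketch below is mine, not part of the original post; the measure helper and the sample arrays are made up for illustration, and it simply times a closure with Foundation’s Date.

import Foundation

// Rough timing helper: runs `work` once and returns the elapsed seconds.
func measure(_ work: () -> Void) -> TimeInterval {
    let start = Date()
    work()
    return Date().timeIntervalSince(start)
}

let small = Array(1...1_000)
let large = Array(1...1_000_000)

// Summing an array is linear work, so the second call should take roughly 1,000x longer.
print(measure { _ = small.reduce(0, +) })
print(measure { _ = large.reduce(0, +) })

Timing like this is only a sanity check; Big-O describes how the work grows with input size, not the absolute numbers on any particular machine.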

There are several ways to describe time efficiency, and they are divided into different categories.

1. Constant Time – O(1) 

This is the best. An algorithm runs in constant time when it executes in the same amount of time regardless of the size of the input. In other words, the run time is the same whether the algorithm operates on one entry or on several thousand or a million entries. Examples include looking up an element of an array by its index, and pushing to and popping from a stack.

let numbers = [1, 2, 3, 4, 5, 6]
print(numbers[0])   // constant time: one lookup, no matter how large the array is
print(numbers[2])
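The stack example mentioned above behaves the same way. Here is a minimal sketch of my own (not from the original post) using a Swift array as a stack: append (push) and popLast (pop) only ever touch the end of the array.

var stack: [Int] = []

// Pushing onto the end of an array is O(1) (amortized).
stack.append(10)
stack.append(20)

// Popping from the end is also O(1), no matter how many elements are stored.
let top = stack.popLast()   // Optional(20)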

2. Linear Time – O(n)

Good performance. An algorithm runs in linear time when its performance grows linearly, in direct proportion to the size of the input data set. In other words, the run time increases at the same rate as the input grows. For example, if it takes 1 second to process 1,000 entries, it will take ten times as long, 10 seconds, to process 10,000 entries. Examples include sequential search.

let numbers = [1, 2, 3, 4, 5]
// One pass over the array: the work grows in direct proportion to the element count.
for i in 0..<numbers.count {
    print(numbers[i])
}
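Sequential search, the example named above, looks like this. The sketch is mine (the function name sequentialSearch is just an illustration): in the worst case it inspects every one of the n elements.

// Returns the index of `target`, or nil if it is not present.
// Worst case: every element is inspected once, so work grows linearly with the array size.
func sequentialSearch(_ target: Int, in array: [Int]) -> Int? {
    for (index, value) in array.enumerated() {
        if value == target {
            return index
        }
    }
    return nil
}

print(sequentialSearch(4, in: [1, 2, 3, 4, 5]) ?? "not found")   // 3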

3. Quadratic Time – O(n^2)

Kinda slow. An algorithm runs in quadratic time when its performance is directly proportional to the square of the size of the input data set. The run time climbs sharply, much faster than the input size, because of nested operations on the dataset. For example, if it takes 100 milliseconds for 100 entries, then 2,000 entries take about 40 seconds and 4,000 entries take about 160 seconds. You can see the significant jump in run time. Examples include insertion sort.

// The inner loop runs once per outer iteration, so the total work grows as n^2.
for i in 1...5 {
    for j in 1...i {
        print("i: \(i), j: \(j)")
    }
}
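Insertion sort, the example named above, shows where the quadratic cost comes from. This is a minimal sketch of my own, not code from the original post: the outer loop visits each element and the inner loop may shift every element before it, so a reverse-sorted array triggers roughly n^2 operations.

// Insertion sort: nested loops give O(n^2) comparisons and shifts in the worst case.
func insertionSort(_ input: [Int]) -> [Int] {
    var array = input
    for i in array.indices.dropFirst() {
        let current = array[i]
        var j = i - 1
        // Shift larger elements one slot to the right until `current` fits.
        while j >= 0 && array[j] > current {
            array[j + 1] = array[j]
            j -= 1
        }
        array[j + 1] = current
    }
    return array
}

print(insertionSort([5, 2, 4, 1, 3]))   // [1, 2, 3, 4, 5]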

4. Logarithmic Time – O(log n)

Pretty great. An algorithm running in logarithmic time is highly efficient: while the data size goes up exponentially, the number of steps goes up only linearly. For example, if it takes 1 millisecond to compute 10 entries, it takes about 2 milliseconds to compute 100 entries and 3 milliseconds to compute 1,000 entries. Binary search is the classic example; divide-and-conquer algorithms such as quick sort build on the same halving idea, though they typically run in O(n log n) time.

let n = 100          // input size; any positive value works here
var j = 1
while j < n {
  // do constant time stuff
  j *= 2             // j doubles each pass, so the loop runs about log2(n) times
}
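Binary search, the example named above, is the classic logarithmic algorithm. Here is a minimal sketch of my own on a sorted array: each comparison discards half of the remaining range, so doubling the input adds only one extra step.

// Binary search over a sorted array: each step halves the search range,
// so the number of comparisons grows with log2(n).
func binarySearch(_ target: Int, in sorted: [Int]) -> Int? {
    var low = 0
    var high = sorted.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if sorted[mid] == target {
            return mid
        } else if sorted[mid] < target {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return nil
}

print(binarySearch(7, in: [1, 3, 5, 7, 9, 11]) ?? "not found")   // 3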

Understanding which one we need

Big-O | Name | Description
O(1) | constant | The best. The algorithm always takes the same amount of time, regardless of how much data there is. Example: looking up an element of an array by its index.
O(log n) | logarithmic | Pretty great. These kinds of algorithms halve the amount of data with each iteration. If you have 100 items, it takes about 7 steps to find the answer; 1,000 items take 10 steps, and 1,000,000 items take only 20 steps. This is super fast even for large amounts of data. Example: binary search.
O(n) | linear | Good performance. If you have 100 items, this does 100 units of work. Doubling the number of items makes the algorithm take exactly twice as long. Example: sequential search.
O(n log n) | linearithmic | Decent performance. This is slightly worse than linear but not too bad. Example: the fastest general-purpose sorting algorithms (see the merge sort sketch after the table).
O(n^2) | quadratic | Kinda slow. If you have 100 items, this does 100^2 = 10,000 units of work. Doubling the number of items makes it four times slower (because 2 squared equals 4). Example: algorithms using nested loops, such as insertion sort.
O(n^3) | cubic | Poor performance. If you have 100 items, this does 100^3 = 1,000,000 units of work. Doubling the input size makes it eight times slower. Example: matrix multiplication.
O(2^n) | exponential | Very poor performance. You want to avoid these kinds of algorithms, but sometimes you have no choice. Adding just one bit to the input doubles the running time. Example: traveling salesperson problem.
O(n!) | factorial | Intolerably slow. It literally takes a million years to do anything.

Table data: Swift Algorithm Club
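For the linearithmic row in the table above, the fastest general-purpose sorts are the usual example. Here is a minimal merge sort sketch of my own (the function names are illustrative only): the array is halved about log n times, and each level of halving does linear merging work, for O(n log n) overall.

// Merge sort: recursive halving plus a linear-time merge at each level.
func mergeSort(_ array: [Int]) -> [Int] {
    guard array.count > 1 else { return array }
    let mid = array.count / 2
    return merge(mergeSort(Array(array[..<mid])), mergeSort(Array(array[mid...])))
}

// Merge two already-sorted arrays in linear time.
func merge(_ left: [Int], _ right: [Int]) -> [Int] {
    var result: [Int] = []
    result.reserveCapacity(left.count + right.count)
    var i = 0, j = 0
    while i < left.count && j < right.count {
        if left[i] <= right[j] {
            result.append(left[i]); i += 1
        } else {
            result.append(right[j]); j += 1
        }
    }
    result.append(contentsOf: left[i...])
    result.append(contentsOf: right[j...])
    return result
}

print(mergeSort([5, 1, 4, 2, 8, 3]))   // [1, 2, 3, 4, 5, 8]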

