

UNIT 1: FUNDAMENTALS OF ALGORITHMS

Structure
1.0 Objectives
1.1 Introduction to algorithm
1.2 Properties of algorithm
1.3 Algorithmic Notations
1.4 Design and development of an algorithm
1.5 Some simple examples
1.6 Summary
1.7 Keywords
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings

1.0 OBJECTIVES
At the end of this unit you will be able to understand:
The fundamentals of algorithms along with their notation.
The various properties of an algorithm.
How to write an algorithm or pseudo code for any problem.
Algorithms for a variety of problems.

1.1 Introduction to algorithm: An algorithm, named for the ninth-century Persian mathematician al-Khowarizmi, is simply a set of rules used to perform some calculations, either by hand or, more usually, on a machine. Even the ancient Greeks used an algorithm, popularly known as Euclid's algorithm, for calculating the greatest common divisor (gcd) of two numbers. An algorithm is a tool for solving a given problem. Before writing a program for solving the given problem, a good programmer first designs and writes the concerned algorithm, analyses it, refines it as many times as required, and arrives at the final efficient form that works well for all valid input data and solves the problem in the shortest possible time, utilizing minimum memory space. Definition of algorithm: An algorithm is defined as a collection of unambiguous instructions occurring in some specific sequence, and such an algorithm should produce output for a given set of input in a finite amount of time.


The basic requirement is that the statement of the problem be made very clear, because certain concepts may be clear to one person and unclear to another. For example, calculating the roots of a quadratic equation may be clear to people who know mathematics, but it may be unclear to someone who does not. A good algorithm is like a sharp knife: it does exactly what it is supposed to do with a minimum amount of applied effort. Using the wrong algorithm to solve a problem is like trying to cut a steak with a screwdriver: you obtain a result, but you will have spent more effort than necessary. Any algorithm should consist of the following: 1. Input: The range of inputs for which the algorithm works perfectly. 2. Output: The algorithm should always produce correct results and it should halt. 3. A finite sequence of instructions that transforms the given input to the desired output (Algorithm + Programming language). Usually, an algorithm is written in simple English-like statements along with simple mathematical expressions. The definition of an algorithm can be illustrated using figure 1.1: Input Problem -- Algorithm -- Computer -- Output (Fig 1.1: Notion of the Algorithm)

Any systematic method for calculating a result can be considered an algorithm. For example, the methods that we learn in school for adding, multiplying and dividing numbers can be considered algorithms. By following the steps specified, we can arrive at the result without even thinking. Even a cooking recipe can be considered an algorithm if the steps: 1. Describe precisely how to make a certain dish. 2. Describe the exact quantities to be used. 3. Detail instructions on what items to add next, at what time, and how long to cook. 1.2 Properties of algorithms:

Each and every algorithm has to satisfy some properties. The various properties, or characteristics, of an algorithm are: 1. Precise and unambiguous (Definiteness): An algorithm must be simple, precise and unambiguous, i.e. there should not be any ambiguity (doubt) in the instructions or statements specified to solve a problem. Each and every instruction used in the algorithm must be clear and unambiguous.


2. Range of inputs: The range of inputs for which the algorithm produces the desired result should be specified.
3. Maintain order: The instructions in each and every step of an algorithm are in a specified order, i.e. they will be executed in sequence (one after the other). The instructions cannot be written in random order.
4. Finite and correct: The algorithm must solve the problem in a certain finite number of steps and produce the appropriate result. The range of input for which the algorithm works perfectly should be specified.
5. Termination: Each algorithm should terminate.
6. Several algorithms may exist for solving a given problem, and the execution speed of each algorithm may be different (for example, to sort, various algorithms such as bubble sort and insertion sort can be used).
7. An algorithm can be represented in several different ways.
8. Algorithms for a given problem can be based on very different ideas (for example, to sort, several methods exist such as bubble sort, insertion sort, radix sort etc.) and they may have different execution speeds.

1.3 Algorithmic Notations
The following notations are usually used while writing any algorithm.
1. Write the word 'Algorithm' followed by the main objective of the algorithm. For example: Algorithm Area_of_circle
2. Then a brief description of what is achieved using the algorithm, along with the inputs to the algorithm, has to be provided. For example: Description: The algorithm computes the area of a circle using the input value radius.
3. Each instruction should be in a separate step, and the step number has to be provided. What is accomplished in each step has to be described briefly and enclosed within square brackets (which we call a comment). For example, to find the area of a circle, we can write: Step 2: [Find the area of the circle] Area ← 3.142 * radius * radius
4. After all operations are over, the algorithm has to be terminated, which indicates the logical end of the algorithm.
For example, the last step in the algorithm will be: Step 4: [Finished] exit. 1.4 Design and development of an algorithm The fundamental steps in solving any given problem, which lead to the complete development of an algorithm, are as follows:


1. Statement of the problem
2. Development of a mathematical model
3. Designing of the algorithm
4. Implementation
5. Analysis of the algorithm for its time and space complexity
6. Program testing and debugging
7. Documentation

1. Statement of the problem. Before we attempt to solve a given problem, we must understand precisely the statement of the problem. There are several ways to do this. We can list all the software specification requirements, ask several questions and get the answers. This helps us to understand the problem more clearly and remove any ambiguity. 2. Development of a mathematical model. Having understood the problem, the next step is to look for a mathematical model which is best suited to the given problem. This is a very important step in the overall solution process and it should be given considerable thought. In fact, the choice of the model has a long way to go in the development process. We must ask: which mathematical model is best suited to the given problem? Is there a model which has already been used to solve a problem that resembles the current one? 3. Designing of the algorithm. As we are comfortable with the specification and the model of the problem at this stage, we can move on to writing down an algorithm. 4. Implementation. In this step, appropriate data structures are to be selected and coded in a target language. The selection of a target language is a very important sub-step to reduce the complexities involved in coding. 5. Analysis of the algorithm for its time and space complexity. We will use, in this section, a number of terms like complexity, analysis, efficiency, etc. All of these terms refer to the performance of a program. Our job cannot stop once we write the algorithm and code it in, say, C or C++ or Java. We should worry about the space and time requirements too. Why? There are several reasons for this, so we shall start with time complexity. In simple terms, the time complexity of a program is the amount of computer time it needs to run.


The space complexity of a program is the amount of memory needed to run the program. 6. Program testing and debugging. After implementing the algorithm in a specific language, the next step is to execute it. After executing the program, the desired output should be obtained. Testing is nothing but the verification of the program for its correctness, i.e. whether the output of the program is correct or not. Using different input values, one can check whether the desired output is obtained. Any logical error can be identified by program testing. Usually, debugging is part of testing. Many debugging tools exist by which one can test the program for its correctness. 7. Documentation. Note that documentation is not the last step. The documentation should exist from understanding the problem until the program is tested and debugged. During the design and implementation phases, the documentation is very useful. To understand the design or code, proper comments should be given. As far as possible, a program should be self-documented, so the usage of proper variable names and data structures plays a very important role in documentation. It is very difficult to read and understand another person's logic and code; documentation enables individuals to understand programs written by other people.

1.5 Some simple examples
1. Algorithm to find the GCD of two numbers (Euclid's algorithm).
ALGORITHM: gcd(m,n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters.
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: if n = 0, return m and stop
Step 2: Divide m by n and assign the remainder to r
Step 3: Assign the value of n to m and the value of r to n
Step 4: Go to Step 1
2. Algorithm to find the GCD of two numbers (consecutive integer checking method).
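Euclid's algorithm above translates almost line for line into runnable code. The sketch below is a Python rendering of the four steps (the function name `gcd` is mine):

```python
def gcd(m, n):
    # Step 1: if n = 0, return m and stop (the loop condition)
    while n != 0:
        # Step 2: divide m by n and assign the remainder to r
        r = m % n
        # Step 3: assign the value of n to m and the value of r to n
        m, n = n, r
        # Step 4: go to Step 1 (the loop repeats)
    return m

print(gcd(60, 24))  # 12
```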


ALGORITHM: gcd(m,n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters.
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: [find the minimum of m and n] r ← min(m,n)
Step 2: [find the gcd using consecutive integer checking]
while (1)
  if (m mod r = 0 and n mod r = 0) break;
  r ← r - 1
end while
Step 3: return r

3. Algorithm to find the GCD of two numbers (repetitive subtraction method).
ALGORITHM: gcd(m,n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters.
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: [If one of the two numbers is zero, return the non-zero number as the GCD]
if (m = 0) return n;
if (n = 0) return m;
Step 2: [Repeat while m and n are different]
while (m != n)
  if (m > n) m ← m - n;
  else n ← n - m;
  end if
end while
Step 3: [finished: return the GCD as the output] return m

Note: The same problem can be solved in many ways (example: Algorithms 1, 2 and 3).
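The repetitive subtraction method (Algorithm 3) can likewise be sketched in Python; again, the function name is mine:

```python
def gcd_subtract(m, n):
    # Step 1: if one of the numbers is zero, the other is the GCD
    if m == 0:
        return n
    if n == 0:
        return m
    # Step 2: repeatedly subtract the smaller from the larger while they differ
    while m != n:
        if m > n:
            m = m - n
        else:
            n = n - m
    # Step 3: return the common value as the GCD
    return m

print(gcd_subtract(60, 24))  # 12
```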


4. Algorithm to generate prime numbers using the sieve of Eratosthenes method (pseudo code).
ALGORITHM SIEVE_PRIME(n)
//Purpose: To generate prime numbers between 2 and n
//Description: This algorithm generates prime numbers using the sieve method
//Input: A positive integer n >= 2
//Output: Prime numbers between 2 and n

[…]

O-Notation (Upper Bound): We write f(n) = O(g(n)) if there exist constants c > 0 and n0 > 0 such that |f(n)| <= c|g(n)| for all n >= n0.
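The pseudocode body of SIEVE_PRIME is missing from these notes; the following Python sketch is my reconstruction of the standard sieve of Eratosthenes that the header above describes, not the original pseudocode:

```python
def sieve_prime(n):
    # is_prime[i] stays True until i is found to be a multiple of a smaller prime
    is_prime = [True] * (n + 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting from p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    # collect the numbers between 2 and n that remain marked prime
    return [i for i in range(2, n + 1) if is_prime[i]]

print(sieve_prime(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```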


Historical Note: The notation was introduced in 1892 by the German mathematician Paul Bachmann.

Ω-Notation (Lower Bound): This notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right of n0, the value of f(n) always lies on or above c·g(n). In set notation, we write as follows: for a given function g(n), the set of functions Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 <= c·g(n) <= f(n) for all n >= n0}. We say that the function g(n) is an asymptotic lower bound for the function f(n).

The intuition behind Ω-notation is that f(n) eventually stays on or above a constant multiple of g(n). Example: √n = Ω(lg n), with c = 1 and n0 = 16.
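The example can be checked numerically; the snippet below samples n >= n0 = 16 and verifies that √n stays on or above 1·lg n (a sanity check on the stated constants, not a proof):

```python
import math

# Check that sqrt(n) >= c * log2(n) with c = 1 for all sampled n >= n0 = 16
for n in range(16, 10000):
    assert math.sqrt(n) >= math.log2(n)
print("sqrt(n) >= lg n holds for 16 <= n < 10000")
```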

1.4.1 Algorithm Analysis: The complexity of an algorithm is a function g(n) that gives an upper bound on the number of operations (or running time) performed by the algorithm when the input size is n. There are two interpretations of upper bound. Worst-case complexity: The running time for any given input size will be lower than the upper bound, except possibly for some values of the input where the maximum is reached.


Average-case complexity: The running time for any given input size is the average number of operations over all problem instances of that size. Because it is quite difficult to estimate the statistical behavior of the input, most of the time we content ourselves with the worst-case behavior. Most of the time, the complexity g(n) is approximated by its family O(f(n)), where f(n) is one of the following functions: n (linear complexity), log n (logarithmic complexity), n^a where a >= 2 (polynomial complexity), a^n (exponential complexity).

1.4.2 Optimality: Once the complexity of an algorithm has been estimated, the question arises whether this algorithm is optimal. An algorithm for a given problem is optimal if its complexity reaches the lower bound over all the algorithms solving this problem. For example, any algorithm solving the intersection of n segments problem will execute at least n² operations in the worst case, even if it does nothing but print the output. This is abbreviated by saying that the problem has Ω(n²) complexity. If one finds an O(n²) algorithm that solves this problem, it will be optimal and of complexity Θ(n²).

1.5 Practical Complexities Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer (which basically means that the problem can be stated by a set of mathematical instructions). Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.


Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically. 1.5.1 Function problems A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. 1.5.2 Measuring the size of an instance To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve.
Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices? If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount


of resources if it admits a polynomial time algorithm.

1.6 Performance measurement of simple algorithms
1. Find the time complexity of the following algorithms:
a) Algorithm: simple
for (i=0; i

[…]

First Pass:
(5 1 4 2 8) → (1 5 4 2 8), swap since 5 > 1
(1 5 4 2 8) → (1 4 5 2 8), swap since 5 > 4
(1 4 5 2 8) → (1 4 2 5 8), swap since 5 > 2
(1 4 2 5 8) → (1 4 2 5 8), now, since these elements are already in order (8 > 5), the algorithm does not swap them.
Second Pass:
(1 4 2 5 8) → (1 4 2 5 8)

(1 4 2 5 8) → (1 2 4 5 8), swap since 4 > 2
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
Finally, the array is sorted, and the algorithm can terminate.

Algorithm: bubbleSort(A : list of sortable items)
n = length(A)
for j ← 1 to n-1 do
  for i ← 0 to n-j-1 do
    if A[i] > A[i+1] then
      swap(A[i], A[i+1])
    end if
  end for
end for
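The bubbleSort pseudocode above can be rendered in Python; I have also added the early-exit flag suggested by the trace (a whole pass with no swaps means the array is sorted):

```python
def bubble_sort(a):
    n = len(a)
    for j in range(1, n):            # passes, as in the outer loop of the pseudocode
        swapped = False
        for i in range(0, n - j):    # compare adjacent pairs a[i], a[i+1]
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:              # a full pass without swaps: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```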

Analysis: Time complexity

t(n) = Σ_{j=1}^{n-1} Σ_{i=0}^{n-j-1} 1 = Σ_{j=1}^{n-1} (n-j) = (n-1) + (n-2) + ... + 3 + 2 + 1 = n(n-1)/2

So, the time complexity of bubble sort is Θ(n²).

Check your progress
1. Write an algorithm for sequential search and trace it for the input {1,9,2,4,6,8}.
2. Write algorithms for bubble sort and selection sort. Apply them to the following set of numbers: {5,8,3,2,1,9}. Find the time complexity of all these algorithms.
3. Write an algorithm for insertion sort. Apply it to the following set of numbers: {6,8,3,6,1,9}. Find the time complexity.

1.3 SUMMARY:

Sorting: It is the process of arranging the elements in either ascending or descending order. Examples: insertion sort, bubble sort, selection sort etc.
Searching: It is the process of finding an element from a set of n elements. Examples: linear search, binary search.

1.9 KEYWORDS
Binary search: searching for an element by the divide and conquer technique.

1.10 ANSWERS TO CHECK YOUR PROGRESS
1. 1
2. 1.2.1 & 1.2.3
3. 1.2.2

1.7 UNIT-END EXERCISES AND ANSWERS
5. Apply the insertion sort algorithm to the set A L G O R I T H M to sort it in ascending order.
6. Apply the binary search algorithm to search for G in the set A B C F H Z X.
Answers: SEE
1. 1.1
2. 1.2

1.10 SUGGESTED READINGS

1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy
4. Even, Shimon, "Graph Algorithms", Computer Science Press.


MODULE-3, UNIT 2: DIVIDE AND CONQUER

Structure
2.0 Objectives
1.1 Introduction
1.2 General Structure of Divide and conquer
1.3 Applications: finding minimum and maximum
1.4 Recurrence Equations
1.6 Summary
1.7 Key words
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings

4.0 OBJECTIVES

At the end of this unit you will be able to:
Find how to apply divide and conquer
Find the time complexity of divide and conquer algorithms
Identify recurrence relations for an algorithm
Know how to solve recurrence relations

INTRODUCTION
The divide and conquer method of designing algorithms is the best-known method of solving a problem. Now, let us see: what is the divide and conquer technique? What is the general plan by which these algorithms work?
Definition: Divide and conquer is a top-down technique for designing algorithms that consists of dividing the problem into smaller subproblems, hoping that the solutions of the subproblems are easier to find. The solutions of all the smaller problems are then combined to get a solution for the original problem. The divide and conquer technique of solving a problem involves three steps at each level of the recursion:
. Divide: The problem is divided into a number of subproblems


. Conquer: The subproblems are conquered by solving them recursively. If the subproblems are small enough, they are solved using a straightforward method
. Combine: The solutions of the subproblems are combined to get the solution for the larger problem.
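The three steps can be written as a generic recursive skeleton. The sketch below is illustrative only; it instantiates divide, conquer and combine to sum a list, and the names are mine:

```python
def divide_and_conquer(problem):
    # Base case: a small problem is solved directly
    if len(problem) <= 1:
        return problem[0] if problem else 0
    # Divide: split the problem into two subproblems
    mid = len(problem) // 2
    left, right = problem[:mid], problem[mid:]
    # Conquer: solve each subproblem recursively
    left_result = divide_and_conquer(left)
    right_result = divide_and_conquer(right)
    # Combine: merge the subproblem solutions (here, by addition)
    return left_result + right_result

print(divide_and_conquer([1, 2, 3, 4, 5]))  # 15
```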

1.1.1 Master's theorem to solve the recurrence relation
The master theorem concerns recurrence relations of the form:

T(n) = a·T(n/b) + f(n), where a >= 1 and b > 1.

In the application to the analysis of a recursive algorithm, the constants and function take on the following significance:

n is the size of the problem. a is the number of subproblems in the recursion.

n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.) f (n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions to the subproblems.

It is possible to determine an asymptotic tight bound in these three cases:

T(n) = Θ(n^d)             if a < b^d
T(n) = Θ(n^d log n)       if a = b^d
T(n) = Θ(n^(log_b a))     if a > b^d

Note: Here, d is the power of n in f(n).
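Since the three cases reduce to comparing a with b^d, they can be mechanized; the helper below (name and output format are mine) returns the asymptotic class as a string:

```python
import math

def master_theorem(a, b, d):
    # T(n) = a*T(n/b) + Theta(n^d): compare a with b^d
    if a < b ** d:
        return f"Theta(n^{d})"
    elif a == b ** d:
        return f"Theta(n^{d} log n)"
    else:
        # a > b^d: T(n) = Theta(n^(log_b a))
        return f"Theta(n^{math.log(a, b):.3f})"

print(master_theorem(2, 2, 1))  # merge sort: Theta(n^1 log n)
print(master_theorem(1, 2, 0))  # binary search: Theta(n^0 log n)
```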

4.2 MAXIMUM AND MINIMUM

Let us consider another simple problem that can be solved by the divide-and-conquer technique. The problem is to find the maximum and minimum items in a set of n elements. In analyzing the time complexity of this algorithm, we once again concentrate on the number of element comparisons.


More importantly, when the elements in a[1:n] are polynomials, vectors, very large numbers, or strings of characters, the cost of an element comparison is much higher than the cost of the other operations. Hence, the time is determined mainly by the total cost of the element comparisons.

1. Algorithm StraightMaxMin(a, n, max, min)
2. // set max to the maximum & min to the minimum of a[1:n]
3. {
4.   max := min := a[1];
5.   for i := 2 to n do
6.   {
7.     if (a[i] > max) then max := a[i];
8.     if (a[i] < min) then min := a[i];
9.   }
10. }

This algorithm requires 2(n-1) element comparisons. An immediate improvement is possible by realizing that the comparison a[i] < min is necessary only when a[i] > max is false, so we can replace the body of the loop with:

if (a[i] > max) then max := a[i];
else if (a[i] < min) then min := a[i];

Now the best case occurs when the elements are in increasing order: the number of element comparisons is n-1. The worst case occurs when the elements are in decreasing order: the number of element comparisons is 2(n-1). On average, a[i] is greater than max half the time, and so the average number of comparisons is 3n/2 - 1. A divide-and-conquer algorithm for this problem would proceed as follows:

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number of elements in the list (a[i], ..., a[j]) and we are interested in


finding the maximum and minimum of the list. If the list has more than 2 elements, P has to be divided into smaller instances. For example, we might divide P into the 2 instances P1 = ([n/2], a[1], ..., a[n/2]) and P2 = (n - [n/2], a[[n/2]+1], ..., a[n]). After having divided P into 2 smaller subproblems, we can solve them by recursively invoking the same divide-and-conquer algorithm.
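The recursive scheme just described can be sketched in Python. This version returns the pair (max, min) instead of using reference parameters, an adaptation on my part:

```python
def max_min(a, i, j):
    # Small instances are solved directly
    if i == j:                       # one element: no comparison needed
        return a[i], a[i]
    if j == i + 1:                   # two elements: one comparison
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    # Divide P into two smaller instances P1 and P2
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid)
    max2, min2 = max_min(a, mid + 1, j)
    # Combine: two comparisons merge the subproblem answers
    return max(max1, max2), min(min1, min2)

print(max_min([22, 13, -5, -8, 15, 60, 17, 31, 47], 0, 8))  # (60, -8)
```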

Algorithm: Recursively Finding the Maximum & Minimum Using the Divide and conquer technique
1. Algorithm MaxMin(i, j, max, min)
2. // a[1:n] is a global array; parameters i and j
3. // are integers, 1 <= i <= j <= n

[…]

In the partitioning step of quick sort, we keep incrementing i as long as a[i] is less than or equal to the pivot, and keep decrementing j as long as a[j] is greater than or equal to the pivot. Once the above condition fails, if i is less than j, we exchange a[i] with a[j] and repeat all of the above process as long as i < j.

For the average case of quick sort, assume T(k) <= a k lg k + b for some constants a > 0 and b > 0, where we pick 'a' and 'b' large enough that a n lg n + b > T(1). Then for n > 1, we have

T(n) <= Σ_{k=1}^{n-1} (2/n)(a k lg k + b) + Θ(n)
     = (2a/n) Σ_{k=1}^{n-1} k lg k + (2b/n)(n - 1) + Θ(n)   ------- (4)

At this point we are claiming that

Σ_{k=1}^{n-1} k lg k <= (1/2) n² lg n - (1/8) n²

Stick this claim into equation (4) above and we get

T(n) <= (2a/n) [ (1/2) n² lg n - (1/8) n² ] + (2b/n)(n - 1) + Θ(n)
     <= a n lg n - (a/4) n + 2b + Θ(n)

In the above expression, we see that Θ(n) + b and a·n/4 are polynomials, and we can certainly choose 'a' large enough that a·n/4 dominates Θ(n) + b. We conclude that QUICKSORT's average running time is O(n lg n).

Conclusion: Quick sort is an in-place sorting algorithm whose worst-case running time is Θ(n²) and expected running time is Θ(n lg n), where the constants hidden in Θ(n lg n) are small.

5.4 BINARY SEARCH

Suppose we are given a number of integers stored in an array A, and we want to locate a specific target integer K in this array. If we do not have any information on how the integers are organized in the array, we have to sequentially examine each element of the array. This is known as linear search and has a time complexity of O(n) in the worst case. However, if the elements of the array are ordered, let us say in ascending order, and we wish to find the position of the target integer K in the array, we need not make a sequential search over the complete array. We can make a faster search using the binary search method. The basic idea is to start with an examination of the middle element of the array. This leads to 3 possible situations: If it matches the target K, then the search terminates successfully by printing out the index of the element in the array. If K < A[middle], then further search is limited to the elements to the left of A[middle]. On the other hand, if K > A[middle], then further search is limited to the elements to the right of A[middle]. If all elements are exhausted and the target is not found in the array, then the method returns a special value such as -1. Here is one version of the Binary Search function:


Algorithm: BinarySearch(int A[], int n, int K)
{
  L = 0, R = n - 1;
  while (L <= R)
  {
    Mid = (L + R) / 2;
    if (K == A[Mid])
      return Mid;
    else if (K > A[Mid])
      L = Mid + 1;
    else
      R = Mid - 1;
  }
  return -1;
}

Analysis of binary search
Best case: The best case occurs when the item to be searched for is present in the middle of the array, so the total number of comparisons required is 1. Therefore, the time complexity of binary search in the best case is Tbest(n) = Θ(1).
Worst case: This case occurs when the key to be searched for is either at the first position or at the last position in the array. In such situations, the maximum number of element comparisons is required, and the time complexity is given by
T(n) = { 1 if n = 1
         T(n/2) + 1 otherwise }
Consider t(n) = t(n/2) + 1. This recurrence relation can be solved using repeated substitution as shown below:
t(n) = t(n/2) + 1        (replace n by n/2)
t(n) = t(n/2²) + 2       (replace n by n/2 again)
t(n) = t(n/2³) + 3
...
In general, t(n) = t(n/2^i) + i.
Finally, to reach the initial condition t(1), let 2^i = n: then t(n) = t(1) + i, where t(1) = 0, so t(n) = i.
We have n = 2^i; taking log on both sides, i·log 2 = log n, so i = log₂ n.
So, the worst-case time complexity is given by Tworst(n) = Θ(log₂ n).
Advantages of binary search: Simple technique. Very efficient searching technique.
Disadvantages of binary search: The array should be sorted.
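The BinarySearch function above carries over directly to Python (same logic, 0-based indexing):

```python
def binary_search(a, key):
    # a must be sorted in ascending order
    left, right = 0, len(a) - 1
    while left <= right:
        mid = (left + right) // 2
        if a[mid] == key:
            return mid               # found: return the index
        elif key > a[mid]:
            left = mid + 1           # search the right half
        else:
            right = mid - 1          # search the left half
    return -1                        # not found

print(binary_search([3, 14, 27, 31, 39, 42, 55, 70], 31))  # 3
```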


Check your progress
1. What is sorting?
2. Write an algorithm for merge sort and explain its working.
3. Write an algorithm for quick sort and calculate its best-case, worst-case and average-case time complexity.
4. Explain the working of binary search with an example. Write the algorithm and give its time complexity.

5.5 SUMMARY
Merge sort is a divide and conquer algorithm. It works by dividing an array into two halves, sorting them recursively, and then merging the two sorted halves to get the original array sorted. The algorithm's time efficiency is the same in all cases, i.e. Θ(n log n).
Quick sort is a divide and conquer algorithm that works by partitioning its input elements according to their value relative to some pre-selected element. Quick sort is noted for its superior efficiency among n log n algorithms for sorting randomly ordered arrays, but also for its quadratic worst-case efficiency.
Binary search is an O(log n) algorithm for searching in sorted arrays. It is a typical example of an application of the divide and conquer technique because it needs to solve just one problem of half the size on each of its iterations.

ANSWERS TO CHECK YOUR PROGRESS
1. 1.1
2. 1.2
3. 1.3
4. 1.4

UNIT-END EXERCISES AND ANSWERS
10. a) What is the largest number of key comparisons made by binary search in searching for a key in the following array? {3,14,27,31,39,42,55,70,74,81,85,93,98}
b) List all the keys of this array that will require the largest number of key comparisons when searched for by binary search.
11. Apply quick sort to the list A N A L Y S I S in alphabetical order.


12. Apply the merge sort algorithm to sort A L G O R I T H M in alphabetical order. Is merge sort a stable algorithm?
Answers: SEE
1. 1.4
2. 1.3
3. 1.2

5.6 SUGGESTED READINGS
1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan


MODULE-3, UNIT 3: GREEDY TECHNIQUE

Structure
4.0 Objectives
1.1 Introduction
1.1.1 Concept of greedy method
1.2 Optimization Problems
1.3 Summary
1.4 Key words
1.5 Answers to check your progress
1.6 Unit-end exercises and answers
1.7 Suggested readings

6.0 OBJECTIVES

At the end of this unit you will be able to:
Find how to apply the greedy technique
Identify whether a problem can be solved using the greedy technique
Know how to find single-source shortest paths
Construct a Huffman tree and generate Huffman codes

INTRODUCTION
Greedy algorithms are simple and straightforward. They are shortsighted in their approach in the sense that they take decisions on the basis of the information at hand, without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement and most of the time quite efficient. Many problems cannot be solved correctly by the greedy approach. Greedy algorithms are used to solve optimization problems.


1.1.1 Concept of Greedy method
A greedy algorithm works by making the decision that seems most promising at any moment; it never reconsiders this decision, whatever situation may arise later. As an example, consider the problem of "Making Change". Coins available are:

dollars (100 cents) quarters (25 cents) dimes (10 cents) nickels (5 cents) pennies (1 cent)

Problem: Make change for a given amount using the smallest possible number of coins. Informal Algorithm

Start with nothing. At every stage, without passing the given amount, add the largest available coin to the coins already chosen.

Formal Algorithm
Make change for n units using the least possible number of coins.
MAKE-CHANGE(n)
{
  C ← {100, 25, 10, 5, 1}   // constants
  S ← {}                    // set that holds the solution
  sum ← 0
  while sum != n


x=largest item in set C such that sum+x
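The MAKE-CHANGE pseudocode is cut off at this point in the notes; the loop body in the Python sketch below is my reconstruction of the standard greedy scheme (take the largest coin that does not pass the amount), not text from the source:

```python
def make_change(n):
    coins = [100, 25, 10, 5, 1]   # constants: dollar, quarter, dime, nickel, penny
    solution = []                 # multiset that holds the solution
    total = 0
    while total != n:
        # greedy choice: largest coin that does not pass the given amount
        x = max(c for c in coins if total + c <= n)
        solution.append(x)
        total += x
    return solution

print(make_change(41))  # [25, 10, 5, 1]
```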