 CS 3343
 Analysis of Algorithms
 Fall 2012

 Recitation 14 
 Final Review 

    Partial Answers  

Topics NOT COVERED on the Final Exam.

Note: As with the previous exam, you may bring to the final exam a single sheet of paper (8.5" by 11", or A4 size, that is, 210-by-297 mm) with anything written or printed on it that you want, even both sides.


  1. Use the Master Theorem to analyze each of the following recurrences. (The theorem may not work for some of them.)

    [Here are the answers:]

    Recurrences, solution by Master Method

    Recurrence                        a    b   log_b(a)         n^log_b(a)     f(n)         Case   Behavior
    a. T(n) = 64 T(n/4) + √n          64   4   log_4(64) = 3    n^3            n^0.5        1      Θ(n^3)
    b. T(n) = 9 T(n/3) + n^2 log(n)   9    3   log_3(9) = 2     n^2            n^2 log(n)   none   Master fails
    c. T(n) = 16 T(n/4) + n^4         16   4   log_4(16) = 2    n^2            n^4          3      Θ(n^4)
    d. T(n) = 2 T(n/4) + √n           2    4   log_4(2) = 1/2   n^(1/2) = √n   √n           2      Θ(√n log(n))
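
    Here is a small Python sketch that mechanizes this classification for recurrences of the form T(n) = a T(n/b) + Θ(n^d (log n)^k); the function name and interface are my own:

        import math

        def master(a, b, d, k=0):
            """Classify T(n) = a T(n/b) + Theta(n^d (log n)^k)
            using the basic Master Theorem."""
            e = math.log(a, b)               # critical exponent log_b(a)
            if math.isclose(b ** d, a):      # d == log_b(a)
                if k == 0:
                    return f"Case 2: Theta(n^{d:g} log n)"
                return "basic Master Theorem does not apply"
            if b ** d < a:                   # d < log_b(a)
                return f"Case 1: Theta(n^{e:g})"
            if k == 0:                       # d > log_b(a)
                return f"Case 3: Theta(n^{d:g})"
            return f"Case 3: Theta(n^{d:g} (log n)^{k})"

        # The four recurrences above, as (a, b, d, k):
        for args in [(64, 4, 0.5), (9, 3, 2, 1), (16, 4, 4), (2, 4, 0.5)]:
            print(args, "->", master(*args))

    (Case 3 also requires the regularity condition a f(n/b) <= c f(n) for some c < 1, which holds for all the polynomial driving functions here.)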


  2. Strassen's Method for Matrix Multiplication:
    1. Consider the matrix multiplication of 4-by-4 matrices.
      1. Using the ordinary method of matrix multiplication, how many scalar multiplications are required? Don't just state this number, but calculate the number from the ordinary matrix multiplication algorithm, where each element of the answer is the inner product of a length 4 row times a length 4 column. [Each inner product takes 4 mults, and there are 4*4 = 16 such, so there are 4*4*4 = 64 mults altogether.]

      2. Using the divide-and-conquer approach and Strassen's method of multiplying 2-by-2 matrices, how many scalar multiplications would be required? Again justify your answer. [Regarding the 4-by-4 matrices as 2-by-2 matrices of 2-by-2 matrices, at the top level, Strassen's method would take 7 mults of 2-by-2 matrices, and each of these would take 7 scalar mults, so there would be 7*7 = 49 scalar mults altogether. (A "scalar mult." means a multiplication of actual numbers, rather than matrices.)]
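
      To see the count concretely, here is a Python sketch of recursive Strassen multiplication (the standard seven-product formulas; the code and names are my own, not the handout's notation) that tallies scalar multiplications. For 4-by-4 inputs it reports 49:

          mult_count = 0   # global tally of scalar multiplications

          def add(X, Y):
              return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

          def sub(X, Y):
              return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

          def strassen(A, B):
              """Multiply n-by-n matrices (n a power of 2) via 7 recursive products."""
              global mult_count
              n = len(A)
              if n == 1:
                  mult_count += 1
                  return [[A[0][0] * B[0][0]]]
              h = n // 2
              def quad(M, r, c):            # extract an h-by-h quadrant
                  return [row[c:c + h] for row in M[r:r + h]]
              A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
              B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
              M1 = strassen(add(A11, A22), add(B11, B22))   # the seven products
              M2 = strassen(add(A21, A22), B11)
              M3 = strassen(A11, sub(B12, B22))
              M4 = strassen(A22, sub(B21, B11))
              M5 = strassen(add(A11, A12), B22)
              M6 = strassen(sub(A21, A11), add(B11, B12))
              M7 = strassen(sub(A12, A22), add(B21, B22))
              C11 = add(sub(add(M1, M4), M5), M7)
              C12 = add(M3, M5)
              C21 = add(M2, M4)
              C22 = add(sub(add(M1, M3), M2), M6)
              return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
                     [r1 + r2 for r1, r2 in zip(C21, C22)]

          A = [[i + j for j in range(4)] for i in range(4)]
          strassen(A, A)
          print(mult_count)   # 7 * 7 = 49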

    2. Now consider 5-by-5 matrices. Again how many scalar multiplications would be required using ordinary matrix multiplication? (Just a number for the answer.) Suppose one could multiply 5-by-5 matrices using 91 scalar multiplications. Give the recurrence for a divide-and-conquer algorithm based on this supposition. Use the Master Method (or some other approach) to get the asymptotic performance in this case. Would this be an interesting result? [Ordinary multiplication takes 5*5*5 = 125 scalar mults. The recurrence in this case would be

        T(n) = 91 T(n/5) + Θ(n^2).

      The Master Method gives performance of

        Θ(n^(log_5 91)) = Θ(n^2.8027545).

      Because this is better than Strassen's method at Θ(n^(log_2 7)) = Θ(n^2.8073549), it would definitely be of interest. (Notice how close these two are.)]
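
      A two-line check of the exponents in Python:

          import math
          print(math.log(91, 5))   # 2.8027545...  (hypothetical 91-mult scheme)
          print(math.log(7, 2))    # 2.8073549...  (Strassen)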


  3. Dijkstra's Algorithm: Refer to Problem 4 in Recitation 13 and to its answer, which is at Answer to Problem 4, along with the other sample problems.


  4. Breadth-First and Depth-First Searches of Graphs: By hand, run through a separate BFS and a separate DFS for the graph below, starting with vertex a. Assume that vertices are ordered alphabetically in the adjacency lists. Show the vertices in the order that they are visited. For each search you must use a stack or a queue, whichever is appropriate, but should not use recursion for this problem. Show the insertions and removals for each case. In each case the sequence of removals should be your final answer. (You don't need to gather any of the other information that was obtained when we did graph searches.)

    [It was not necessary to fill out the full tables below, especially if the answer was correct.]

    Queue: add at right, remove from left

    BFS Queue    Action                  Add to Queue
    (initial)    --                      Add a
    a            Remove a and visit a    Add b, f
    b f          Remove b and visit b    Add c, d
    f c d        Remove f and visit f    Add g
    c d g        Remove c and visit c    Add e
    d g e        Remove d and visit d    Add h
    g e h        Remove g and visit g    Add --
    e h          Remove e and visit e    Add --
    h            Remove h and visit h    Add --

    BFS Order: a b f c d g e h

    Stack: push at right, remove from right

    DFS Stack    Action                  Push on Stack
    (initial)    --                      Push a
    a            Pop a and visit a       Push b, f
    b f          Pop f and visit f       Push g
    b g          Pop g and visit g       Push h
    b h          Pop h and visit h       Push --
    b            Pop b and visit b       Push c, d
    c d          Pop d and visit d       Push e
    c e          Pop e and visit e       Push --
    c            Pop c and visit c       Push --

    DFS Order: a f g h b d e c
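
    The same traversals in code. The graph figure is not reproduced here, so the adjacency lists below are inferred from the answer tables and are an assumption (I have made the edges directed where the tables require it):

        from collections import deque

        # Adjacency lists inferred from the answer tables above
        # (an assumption; the original figure is not shown here).
        adj = {
            'a': ['b', 'f'], 'b': ['c', 'd'], 'c': ['e'], 'd': ['e', 'h'],
            'e': [], 'f': ['g'], 'g': ['h'], 'h': [],
        }

        def bfs(start):
            order, seen, q = [], {start}, deque([start])
            while q:
                u = q.popleft()            # remove from the left
                order.append(u)            # visit u
                for v in adj[u]:
                    if v not in seen:      # mark when added to the queue
                        seen.add(v)
                        q.append(v)
            return order

        def dfs(start):
            order, seen, stack = [], set(), [start]
            while stack:
                u = stack.pop()            # remove from the right
                if u in seen:
                    continue
                seen.add(u)                # mark when popped and visited
                order.append(u)
                for v in adj[u]:           # alphabetical pushes mean the
                    if v not in seen:      # last neighbor is popped first
                        stack.append(v)
            return order

        print(' '.join(bfs('a')))   # a b f c d g e h
        print(' '.join(dfs('a')))   # a f g h b d e c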


  5. Example of Polynomial Time Equivalence. Consider the two problems:

    L1: computing sqrt(x) for x in the range 1/4 <= x <= 1.
    L2: computing sqrt(x) for any x > 0.

    Consider also the two reductions L1 ≤p L2 and L2 ≤p L1.

    L1 ≤p L2 means to reduce the problem of solving L1 to the problem of solving L2. This means that we start with an instance of L1 and try to find a corresponding instance of L2 in such a way that the solution to the L2 instance gives us the solution to the L1 instance.

    Another way to say this is that assuming we can solve problems in L2, does that let us solve problems in L1?

    1. One of these reductions is trivial. Which one is "trivial" and why do I use that word? [The direction L1 ≤p L2 is the trivial one, because if we can take the square root of any positive number, we can certainly take the square root of any number in the range 1/4 <= x <= 1. Another way to say this is: given any desired sqrt(x) in the range 1/4 <= x <= 1, we associate it with exactly the same problem in the greater range, x > 0. If we can find sqrt(x) for any x > 0, then certainly we can find sqrt(x) for any 1/4 <= x <= 1.]

    2. The other is something that we proved. What rough technique did we use to prove it? [To prove the other direction, L2 ≤p L1, we start knowing that we can find sqrt(x) in the range 1/4 <= x <= 1. Given an arbitrary x > 0, we normalize it by repeatedly multiplying or dividing by 4 until the value lies in the range 1/4 <= x <= 1. Then we take the square root. Finally, dividing or multiplying the result by 2 the same number of times denormalizes it and gives the solution to the original square root problem.]
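
    A minimal Python sketch of this normalize/denormalize reduction (the names are mine, and sqrt_restricted stands in for the routine L1 assumes we have):

        import math

        def sqrt_restricted(x):
            # Stand-in for the given routine: only valid on [1/4, 1].
            assert 0.25 <= x <= 1.0
            return math.sqrt(x)

        def sqrt_any(x):
            # The reduction L2 <=p L1: normalize, solve, denormalize.
            assert x > 0
            k = 0
            while x > 1.0:        # too big: divide by 4 (k counts factors)
                x /= 4.0
                k += 1
            while x < 0.25:       # too small: multiply by 4
                x *= 4.0
                k -= 1
            # sqrt(x * 4^k) = sqrt(x) * 2^k, so undo with the same power of 2.
            return sqrt_restricted(x) * 2.0 ** k

        print(sqrt_any(1000000.0))   # 1000.0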


  6. Problems in NP: This problem has nothing to do with NP-complete problems, but just NP.
    1. Is any problem in P (polynomial-time problems) also in NP? Yes. Using the "guessing-stage / checking-stage" definition of NP below, we see that a problem in P satisfies this definition without making use of the guessing-stage at all.
    2. Is P the same as NP? Can you prove this? Most researchers now think that P is a proper subset of NP, but no one has been able to prove this.
    3. Give the "guessing-stage / checking-stage" definition that shows a problem is in NP. NP is the class of problems solvable on a nondeterministic computer. A nondeterministic computer can carry out a nondeterministic algorithm. Such an algorithm can proceed in two stages:
      1. Guessing stage: the algorithm "guesses" a structure S, which either is the answer to the problem or will help compute the answer.
      2. Checking stage: the computer proceeds in a normal deterministic fashion to use S in arriving at a Yes-No answer to the problem. The checking stage must be done in polynomial time. In effect S is the answer and the checking stage verifies in polynomial time that S is the answer.
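
    As a concrete illustration (my own example, anticipating Problem 7 below): for Subset-Sum, the guessed structure S is a proposed subset, and the checking stage just verifies in polynomial time that S is a subset of A with the desired sum:

        def check_subset_sum(A, target, S):
            """Checking stage: verify the guessed subset S in polynomial time."""
            return set(S) <= set(A) and sum(S) == target

        print(check_subset_sum({3, 7, 12, 5}, 15, {3, 12}))   # True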


  7. Two NP-complete Problems: Subset-Sum and Partition: We already looked at the Subset-Sum Problem (Recitation 7), but the notation below is slightly different than before:

    Subset-Sum Problem (SS):
    Start with a set A of positive integers and a desired sum S.
    The decision question: Is there a subset A' of A such that sum( a | a in A' ) = S?

    Next is the Partition Problem (PART):

    Partition Problem (PART):
    Start with a set A of positive integers.
    The decision question: Is there a subset A' of A such that
    sum( a | a in A' ) =  sum( a | a in A - A' )?
    [Here A - A' is the complement of A' in A.]

    Show that PART ≤p SS, that is, show that PART can be reduced to SS.

    We want to show that if SS can be solved efficiently, then PART can be solved efficiently. In order to do this, we start with an arbitrary instance P of PART, come up with a way to transform P into an instance P' of SS, and finally show that the answer to P is "Yes" if and only if (exactly when) the answer to P' is "Yes".

    Start with an instance P of PART, that is, start with a set A of positive integers. We assume that we can solve the SS problem. In order to answer "Yes" to P, we need to find a subset A' of A such that sum( a | a in A' ) =  sum( a | a in A - A' ).

    So construct an instance P' of SS. The set A in SS will be the same as the one in PART. Let T be the sum of all elements of A: T = sum( a | a in A ). If T is odd, then the answer to P' is immediately "No". Otherwise, use SS to try to find a subset A' such that S = T/2 = sum( a | a in A' ). Then clearly S = T/2 = sum( a | a in A - A' ). There will be such a subset A' given by SS if and only if the same subset A' works to solve P in PART.
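
    A sketch of this transformation in Python, assuming an SS solver; subset_sum below is just a brute-force stand-in for the solver the reduction assumes (the names are mine):

        from itertools import combinations

        def subset_sum(A, S):
            # Brute-force stand-in for the assumed SS solver.
            return any(sum(c) == S
                       for r in range(len(A) + 1)
                       for c in combinations(A, r))

        def partition(A):
            T = sum(A)
            if T % 2 == 1:         # odd total: the answer is immediately "No"
                return False
            return subset_sum(A, T // 2)   # ask SS for a subset summing to T/2

        print(partition({3, 1, 2, 4}))    # True: {1, 4} vs {3, 2}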

    Which of the two following facts does this prove:

    a. If SS can be solved in polynomial time, then PART can be solved in polynomial time.
    b. If PART can be solved in polynomial time, then SS can be solved in polynomial time.

    Part a. was shown above.

    In order to prove that a problem P is NP-complete, you need to find a problem P' that you already know is NP-complete, and you need to reduce P' to P, that is, you need to prove that P' ≤p P. Since P' is known to be NP-complete, by definition,

        L ≤p P' for every problem L in NP.

    Then

        P' ≤p P,

    so by transitivity, we have

        L ≤p P for every problem L in NP,

    which (together with the fact that P is in NP) is the definition that P is NP-complete.


    In the discussion above, PART is P' and SS is P, so P' = PART would be the known NP-complete problem and P = SS would be the problem to be proved NP-complete.


  8. Turing Machines, Undecidable Problems: Consider the following "mystery" program in the language used for our Turing machines:

    Instructions in the Turing Language

    Instruction                Representation          Op Code
    print 0                    000                     0
    print 1                    001                     1
    move left                  010                     2
    move right                 011                     3
    stop                       100                     4
    if 0 is scanned goto i     101 0...0 1 (i 0's)     5
    if 1 is scanned goto j     110 1...1 0 (j 1's)     6
    (end marker)               111                     -
     
    Mystery Program ??

        Instruction                   Representation
     1. move left                     010
     2. if 1 is scanned goto 1        110 1 0
     3. if 0 is scanned goto 4        101 0000 1
     4. print 1                       001
     5. move right                    011
     6. if 1 is scanned goto 5        110 11111 0
     7. print 1                       001
     8. move right                    011
     9. if 0 is scanned goto 1        101 0 1
    10. stop                          100
        (end)                         111

    Initial   ...0000000000...
     tape:          |

    Explain carefully what this mystery program does. What is its output and what actions does it go through to produce this output? (This is easier than you might think, although in a sense it's a bit of a trick.)

[This is a TM that will never halt. It keeps going back and forth, tacking a "1" on at either end of an ever-growing sequence of 1's. In more detail:

lines 1-3 search left for the first 0 (line 3 merely transfers to the line after it).
line 4 overwrites this first 0 with a 1.
lines 5-6 search right for the first 0.
line 7 overwrites this first 0 with a 1.
line 8 moves one more square right, where we know there is a 0.
line 9 always goes back to step 1, since we know the current square is 0.
line 10 is never executed.

The execution will never stop; it goes on indefinitely (or "forever"). But there will never be more than finitely many 1s on the tape.
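
For anyone who wants to watch it run, here is a minimal Python simulator sketch for this toy instruction language (the tuple encoding of the program and all names are my own); the run is capped at a fixed number of steps, since the program never halts:

    from collections import defaultdict

    # The mystery program, one tuple per numbered line.
    program = [
        ('left',),       #  1. move left
        ('if1', 1),      #  2. if 1 is scanned goto 1
        ('if0', 4),      #  3. if 0 is scanned goto 4
        ('print', 1),    #  4. print 1
        ('right',),      #  5. move right
        ('if1', 5),      #  6. if 1 is scanned goto 5
        ('print', 1),    #  7. print 1
        ('right',),      #  8. move right
        ('if0', 1),      #  9. if 0 is scanned goto 1
        ('stop',),       # 10. stop
    ]

    def run(program, max_steps=50):
        tape = defaultdict(int)        # blank tape: all 0's
        head, pc = 0, 0
        for _ in range(max_steps):     # bound the run: it never halts
            op = program[pc]
            if op[0] == 'stop':
                break
            if op[0] == 'print':
                tape[head] = op[1]
            elif op[0] == 'left':
                head -= 1
            elif op[0] == 'right':
                head += 1
            elif op[0] == 'if0' and tape[head] == 0:
                pc = op[1] - 1         # goto (program lines are 1-based)
                continue
            elif op[0] == 'if1' and tape[head] == 1:
                pc = op[1] - 1
                continue
            pc += 1                    # fall through to the next line
        # Show the portion of the tape that was touched.
        lo, hi = min(tape), max(tape)
        print(''.join(str(tape[i]) for i in range(lo, hi + 1)))

    run(program)   # prints an ever-wider block of 1's as max_steps grows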


Note 1: It is difficult to produce simple programs in this language, especially ones that halt.


Note 2: The program can be simplified:
lines 8, 9, and 10 could be replaced by
new line 8: if 1 is scanned goto 1.
Since line 7 wrote a 1, the new line 8 will always transfer to step 1. There is no need for a stop instruction at line 9, since it would never be executed. (I put in a stop so you wouldn't immediately know it was either an invalid program or had an infinite loop.)]


Revision date: 2012-12-11. (Please use ISO 8601, the International Standard.)