 CS 3343
 Analysis of Algorithms
 Fall 2013

 Recitation 13
 In-Class Problems
Answers

    Week 13: Nov 20


  1. Reductions. Suppose you are writing software for a high-precision floating-point engine. (Instead of 64-bit floating-point numbers, perhaps 1024-bit floating-point numbers, with say 1000 bits used for the significand and 24 bits used for the sign and exponent.) Suppose you have written reasonably efficient routines to do:

       - addition and subtraction,
       - negation,
       - multiplication and division by powers of 2.

     Now you want to add other arithmetic functions, especially multiplication and division.

     We will look at 4 problems:

       - Ps: squaring, compute x^2
       - Pm: multiplication, compute x * y
       - Pr: reciprocal, compute 1 / x
       - Pd: division, compute x / y

     We will write P1 ≤ P2 to mean a reduction of P1 to P2, similar to the book's ≤P notation. Here, however, the reduction is not polynomial-time, but rather an efficient reduction, comparable in cost to the other operations on these numbers. Intuitively, this means: if you can compute P2 efficiently, then you can compute P1 efficiently.

    1. If 1000 binary bits are used for the significand, roughly how many decimal digits of accuracy would we have? [We've seen this before: the conversion factor is log10(2) ≈ 0.30103, so 1000 bits correspond to roughly 301 decimal digits.]
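      Spelled out (using the conversion factor quoted in the answer), in LaTeX:

        1000 \cdot \log_{10} 2 \;\approx\; 1000 \cdot 0.30103 \;\approx\; 301.03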

    2. Two of the reductions are trivial. Which are they? (Think about this carefully.) [Ps ≤ Pm and Pr ≤ Pd are trivial. In each case the problem on the left is a special case of the problem on the right, so obviously if you can solve the problems on the right, you can solve those on the left. In each case there is a trivial reduction of the problem on the left to the one on the right: just map the problem to itself.]
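      In code the two trivial reductions are just wrappers. A minimal Python sketch, with ordinary floats standing in for the 1024-bit type and with multiply and divide as stand-ins for the assumed Pm and Pd routines:

        def multiply(x, y):
            # Stand-in for the assumed multiplication routine Pm.
            return x * y

        def divide(x, y):
            # Stand-in for the assumed division routine Pd.
            return x / y

        def square(x):
            # Ps <= Pm: squaring is the special case of multiplying x by itself.
            return multiply(x, x)

        def reciprocal(x):
            # Pr <= Pd: the reciprocal is the special case of dividing 1 by x.
            return divide(1.0, x)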

    3. Use the formula below to come up with another reduction: [The reduction is Pm ≤ Ps. This uses the fact that we can freely do addition, subtraction, negation, and multiplication/division by powers of 2. The formula then shows how to use squaring to multiply; one identity of this kind is written out below.]
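      One identity that fits this description, using only squaring plus the freely available operations (whether it is the exact formula intended here is an assumption), in LaTeX:

        x \cdot y \;=\; \frac{(x+y)^2 - (x-y)^2}{4}

      The right-hand side needs one addition, two subtractions, two squarings, and a division by 4, which is just a power-of-2 scaling.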

    4. Use the formula below to come up with yet another reduction: [The reduction is Ps ≤ Pr. This is similar to the previous item. The right-hand side of the formula involves only reciprocals (in addition to the operations listed at the beginning); the left-hand side is just a square. One identity of this kind is written out below.]
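      One identity of this shape (again an assumed stand-in for the exact formula), valid for x other than 0 and -1, in LaTeX:

        x^2 \;=\; \frac{1}{\dfrac{1}{x} - \dfrac{1}{x+1}} \;-\; x

      The right-hand side uses three reciprocals, one addition, and two subtractions; the values x = 0 and x = -1 have to be handled separately (their squares are 0 and 1).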

    5. Use the following iteration formula from Newton's method (which we studied) to find another reduction.

      In this case you should explain in some detail how you intend to handle the initial guess for Newton's method, which should not be too far from the correct answer. [Hint: here is the page that covers this material.] [This is how we showed that if we can multiply, then Newton's method lets us calculate the reciprocal, a very big result. The reduction is Pr ≤ Pm.]
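      The standard Newton iteration for 1/b uses only multiplication and subtraction: x_{n+1} = x_n * (2 - b * x_n); whether this is exactly the formula intended here is an assumption. A minimal Python sketch, with ordinary floats standing in for the 1024-bit type and with the initial-guess problem handled by scaling b into [1/2, 1) by a power of 2:

        import math

        def reciprocal(b, iterations=8):
            # Pr <= Pm: approximate 1/b using only multiplication, subtraction,
            # and scaling by powers of 2 (a sketch, not the course's code).
            if b == 0:
                raise ZeroDivisionError("reciprocal of zero")
            sign = -1.0 if b < 0 else 1.0
            # Write |b| = m * 2**e with m in [1/2, 1); this is pure exponent handling.
            m, e = math.frexp(abs(b))
            # Initial guess for 1/m: the linear estimate below has relative error
            # |1 - m*x| <= 1/17 on [1/2, 1), well within Newton's region of fast convergence.
            x = 48.0 / 17.0 - (32.0 / 17.0) * m
            # Newton step x <- x * (2 - m*x): the relative error gets squared each
            # time, so the number of correct bits roughly doubles per iteration.
            for _ in range(iterations):
                x = x * (2.0 - m * x)
            # Undo the scaling: 1/b = (1/m) * 2**(-e), a power-of-2 multiplication.
            return sign * math.ldexp(x, -e)

      For example, reciprocal(3.0) agrees with 1/3 to within a unit or two in the last place, and a handful of iterations already suffices at double precision.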

    6. Argue that if you can do both Pm and Pr efficiently, then you can also do Pd efficiently.
      [This is pretty much obvious because a / b = a * (1 / b). We don't have a notation for this. Perhaps write: Pd ≤ (Pm and Pr)]

    7. Argue that if we can do Ps efficiently, then we can also do the other three efficiently: Pm, Pr, and Pd. (This might be a quick and dirty way to do a half-way efficient implementation.) [Note: Not all the reductions above are needed to show part 7.] [This follows from the reductions above and transitivity; a sketch of the whole chain follows:
      Pm ≤ Ps, so if we can square, we can multiply;
      Pr ≤ Pm ≤ Ps, so if we can square, we can find reciprocals;
      Pd ≤ (Pm and Pr) ≤ Ps, so if we can square, we can divide.]
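      A minimal Python sketch of the whole chain, with ordinary floats standing in for the 1024-bit type; only square(), addition, subtraction, negation, and power-of-2 scaling are treated as primitives, and everything else is built on top of them:

        import math

        def square(x):
            # Stand-in for the assumed efficient squaring routine Ps.
            return x * x

        def multiply(x, y):
            # Pm <= Ps, via x*y = ((x+y)^2 - (x-y)^2) / 4; the division by 4 is
            # a cheap power-of-2 scaling.
            return (square(x + y) - square(x - y)) / 4.0

        def reciprocal(b, iterations=8):
            # Pr <= Pm <= Ps: the Newton iteration x <- x * (2 - b*x), with every
            # multiplication routed through multiply() above.
            if b == 0:
                raise ZeroDivisionError("reciprocal of zero")
            sign = -1.0 if b < 0 else 1.0
            m, e = math.frexp(abs(b))                    # |b| = m * 2**e, m in [1/2, 1)
            x = 48.0 / 17.0 - multiply(32.0 / 17.0, m)   # linear initial guess
            for _ in range(iterations):
                x = multiply(x, 2.0 - multiply(m, x))
            return sign * math.ldexp(x, -e)              # undo the power-of-2 scaling

        def divide(a, b):
            # Pd <= (Pm and Pr): a / b = a * (1 / b).
            return multiply(a, reciprocal(b))

      For example, multiply(6.0, 7.0) returns exactly 42.0, and divide(10.0, 4.0) returns 2.5 up to rounding in the last bit or two; nothing beyond square() and the cheap operations is used.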


Revision date: 2012-11-26. (Please use ISO 8601, the International Standard.)