CS 3343
Analysis of Algorithms
Fall 2013 Recitation 13
In-Class Problems
Answers
Week 13: Nov 20
Reductions
Suppose you are writing software for a high-precision floating
point engine. (Instead of 64-bit floating point numbers,
perhaps 1024-bit floating point numbers, say 1000
bits used for the significand,
and 24 bits used for the sign and exponent.)
Suppose you have written reasonably efficient routines for
addition, subtraction, negation, and multiplication and division by powers of 2.
Now you want to add other arithmetic functions, especially
multiplication and division.
We will look at 4 problems:
Pm = multiplication of two numbers (a * b)
Pd = division of two numbers (a / b)
Pr = reciprocal of a number (1 / a)
Ps = square of a number (a²)
We will be writing stuff like
P1 ≤ P2
to mean a reduction of P1 to
P2, similar to the book's
≤p. However, here it's not polynomial-time,
but rather an efficient reduction comparable to the other
operations on these numbers. Intuitively, this means: if you
can compute P2 efficiently, then you can
compute P1 efficiently.
If 1000 binary bits are used for the significand, roughly
how many decimal digits of accuracy would we have?
[We've seen this before: the conversion
factor is log10(2) = 0.3010299956, so 1000 bits give
roughly 1000 × 0.30103 ≈ 301 decimal digits.]
Two of the reductions are trivial. Which are they?
(Think about this carefully.)
[Ps ≤ Pm
and Pr ≤ Pd are
trivial. In each case the problem on the left is a special
case of the problem on the right (a² = a * a, and 1/a = 1 / a),
so if you can solve the problem on the right, you can solve
the one on the left: the trivial reduction just maps the
problem to itself.]
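A minimal sketch of the two trivial reductions, using Python's Fraction as a stand-in for the high-precision type; the routine names multiply and divide are illustrative assumptions, not names from the handout:

    from fractions import Fraction   # stand-in for the 1024-bit float type

    def multiply(a, b):              # assumed Pm routine
        return a * b

    def divide(a, b):                # assumed Pd routine
        return a / b

    def square(a):                   # Ps <= Pm: a^2 is just a * a
        return multiply(a, a)

    def reciprocal(a):               # Pr <= Pd: 1/a is just 1 / a
        return divide(Fraction(1), a)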
Use the formula below to come up with another
reduction:
[The reduction is
Pm ≤ Ps.
This uses the fact that we can freely do
addition, subtraction, negation and mult/div by powers of 2.
The formula then clearly shows how to take
squaring and use it to multiply.]
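One identity that fits this answer (an assumption on my part; the handout's own formula may be written differently) is a * b = ((a+b)² − (a−b)²) / 4, which needs only squaring, addition, subtraction, and division by 4, a power of 2. A sketch:

    def multiply_via_square(a, b, square):
        # a*b = ((a+b)^2 - (a-b)^2) / 4 : only squaring, +, -, and
        # division by a power of 2 are used.
        return (square(a + b) - square(a - b)) / 4

For example, multiply_via_square(3, 5, lambda x: x * x) returns 15.0.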
Use the formula below to come up with yet another
reduction:
[The reduction is
Ps ≤ Pr.
This is similar to the previous item. The right-hand side of the
formula involves only reciprocals (in addition to the operations listed
at the beginning), while the left-hand side is just a square.]
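Here too the exact identity is an assumption on my part; one that works is a² = 1/(1/a − 1/(a+1)) − a for a ≠ 0, −1, since 1/a − 1/(a+1) = 1/(a² + a). A sketch:

    def square_via_reciprocal(a, reciprocal):
        # 1/a - 1/(a+1) = 1/(a^2 + a); one more reciprocal gives a^2 + a,
        # and subtracting a leaves a^2.  Requires a != 0 and a != -1.
        d = reciprocal(a) - reciprocal(a + 1)
        return reciprocal(d) - a

For example, with Fraction inputs, square_via_reciprocal(Fraction(3), lambda x: 1 / x) returns Fraction(9, 1).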
Use the following iteration formula
from Newton's method (which we studied) to find another reduction.
In this case you should explain in some detail how you intend to handle
the initial guess for Newton's method, which should not be too
far from the correct answer. [Hint: the course page
on Newton's method covers this material.]
[This was how we showed that if we can multiply,
then Newton's method will let us calculate the reciprocal, a
very big result. The reduction is
Pr ≤ Pm.]
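The iteration meant here is presumably the standard Newton iteration for f(x) = 1/x − a, namely x_{n+1} = x_n (2 − a x_n), which uses only multiplication and subtraction. For the initial guess, the floating-point representation itself helps: writing a = m · 2^e with 1/2 ≤ |m| < 1, the value 2^(−e) (obtainable by exponent manipulation alone) is within a factor of 2 of 1/a, and from there the iteration roughly doubles the number of correct bits per step, so about 10 iterations would suffice for a 1000-bit significand. A sketch with ordinary Python floats standing in for the high-precision type:

    import math

    def reciprocal_newton(a, multiply, iterations=6):
        # Pr <= Pm: x_{n+1} = x_n * (2 - a * x_n) converges quadratically
        # to 1/a.  Requires a != 0.  Six iterations are enough for double
        # precision; about 10 would be needed for ~1000-bit significands.
        m, e = math.frexp(a)           # a = m * 2**e, 0.5 <= |m| < 1
        x = math.ldexp(1.0, -e)        # initial guess 2**(-e)
        if a < 0:
            x = -x                     # match the sign of 1/a
        for _ in range(iterations):
            x = multiply(x, 2 - multiply(a, x))
        return x

    # e.g. reciprocal_newton(3.0, lambda u, v: u * v) ~= 0.3333333333333333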
Argue that if you can do both Pm
and Pr efficiently, then you can also do
Pd efficiently.
[This is pretty much obvious because
a / b = a * (1 / b). We don't have a notation for
this. Perhaps write:
Pd ≤ (Pm and Pr)]
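With the hypothetical multiply and reciprocal routines from the sketches above, the combined reduction is a single line of code:

    def divide_via(a, b, multiply, reciprocal):
        # Pd <= (Pm and Pr): a / b = a * (1 / b)
        return multiply(a, reciprocal(b))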
Argue that if we can do Ps efficiently,
then we can also do the other three efficiently:
Pm, Pr, and Pd.
(This might be a quick and dirty way to do a half-way efficient
implementation.)
[Note: Not all the reductions above
are needed to show this last part.]
[This follows from the reductions above and
transitivity: Pm ≤ Ps,
so if we can square, we can multiply; Pr ≤ Pm ≤ Ps,
so if we can square, we can find reciprocals; and Pd ≤ (Pm and Pr) ≤ Ps,
so if we can square, we can divide.]
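Putting the chain together, here is a sketch in which multiplication, reciprocal, and division all bottom out in a single square() routine plus the freely available operations; the helper names and the iteration count are illustrative assumptions:

    import math

    def build_arithmetic(square):
        def multiply(a, b):                     # Pm <= Ps
            return (square(a + b) - square(a - b)) / 4

        def reciprocal(a, iterations=6):        # Pr <= Pm <= Ps (Newton)
            m, e = math.frexp(a)
            x = math.ldexp(1.0, -e)             # initial guess 2**(-e)
            if a < 0:
                x = -x
            for _ in range(iterations):
                x = multiply(x, 2 - multiply(a, x))
            return x

        def divide(a, b):                       # Pd <= (Pm and Pr) <= Ps
            return multiply(a, reciprocal(b))

        return multiply, reciprocal, divide

    # e.g. mul, rec, div = build_arithmetic(lambda x: x * x)
    #      div(1.0, 3.0) ~= 0.3333333333333333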