Reductions
Suppose you are writing software for a high-precision floating
point engine. (Instead of 64-bit floating point numbers, perhaps
1024-bit floating point numbers: say 1000 bits for the
significand and 24 bits for the sign and exponent.)
Suppose you have written reasonably efficient routines for
addition, subtraction, negation, and multiplication and division
by powers of 2. Now you want to add other arithmetic functions,
especially general multiplication and division.
We will look at 4 problems:
    Pm = multiplication of two numbers (a * b)
    Pd = division of two numbers (a / b)
    Pr = reciprocal of a number (1 / a)
    Ps = square of a number (a^2)
We will write
    P1 ≤ P2
to mean a reduction of P1 to P2, similar to the book's ≤p. Here,
however, the reduction is not polynomial-time, but an efficient
one whose extra cost is comparable to the other operations on
these numbers. Intuitively, this means: if you can compute P2
efficiently, then you can compute P1 efficiently.
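For instance, negation reduces to subtraction, since -a = 0 - a.
In a Python-style sketch (sub and neg are illustrative stand-ins
for the engine's routines, not names from above):

    def sub(a, b):
        # Stand-in for the engine's high-precision subtraction.
        return a - b

    def neg(a):
        # Negation ≤ subtraction: -a = 0 - a, one call to sub.
        return sub(0, a)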
i. If 1000 binary bits are used for the significand, roughly how
many decimal digits of accuracy would we have?
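[Hint for the conversion: 2^10 = 1024 ≈ 10^3, so every 10 bits of
significand carry roughly 3 decimal digits; in general, b bits
give about b * log10(2) ≈ 0.301 * b decimal digits.]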
ii. Two of the reductions are trivial. Which are they? (Think
about this carefully.)
iii. Use the formula below to come up with another reduction:
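A minimal sketch of what such a reduction can look like, assuming
the formula in question is the squaring identity
a*b = ((a+b)^2 - (a-b)^2) / 4 (an assumption; the identity itself
uses only squaring, addition, subtraction, and division by 4, a
power of 2):

    def square(a):
        # Stand-in for the engine's efficient squaring routine (Ps).
        return a * a

    def mul(a, b):
        # Pm ≤ Ps: two squarings, one addition, two subtractions,
        # and one division by 4 (a power of 2).
        return (square(a + b) - square(a - b)) / 4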
iv. Use the formula below to come up with yet another reduction:
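One identity that fits here (again an assumption about which
formula is meant) is 1/(a-1) - 1/a = 1/(a^2 - a), which
rearranges to a^2 = 1/(1/(a-1) - 1/a) + a and so reduces squaring
to reciprocals. A sketch:

    def recip(a):
        # Stand-in for the engine's efficient reciprocal routine (Pr).
        return 1 / a

    def square_via_recip(a):
        # Ps ≤ Pr: a^2 = 1/(1/(a-1) - 1/a) + a, i.e. three
        # reciprocals plus additions and subtractions.
        # a = 0 and a = 1 would divide by zero, but there a^2 = a.
        if a == 0 or a == 1:
            return a
        return recip(recip(a - 1) - recip(a)) + a

Note that 1/(a-1) - 1/a cancels badly for large a in fixed
precision, so this is more a logical reduction than a numerically
robust one.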
v. Use the following iteration formula from Newton's method
(which we studied) to find another reduction. In this case you
should explain in some detail how you intend to handle the
initial guess for Newton's method, which should not be too far
from the correct answer. [Hint: the course page on Newton's
method covers this material.]
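If the iteration meant is the standard Newton iteration for the
reciprocal, x_{k+1} = x_k * (2 - a * x_k) (Newton's method
applied to f(x) = 1/x - a), then it reduces Pr to Pm: each step
needs only two multiplications and a subtraction. One common way
to get a good initial guess (an illustration, not necessarily the
intended one) is to negate the binary exponent of a, which lands
within a factor of 2 of 1/a; the iteration then converges
quadratically, doubling the number of correct bits each pass, so
roughly log2(1000) ≈ 10 passes suffice for a 1000-bit
significand. A Python sketch, with ordinary floats standing in
for the big numbers:

    import math

    def mul(a, b):
        # Stand-in for the engine's efficient multiplication (Pm).
        return a * b

    def recip_newton(a, iterations=6):
        # Pr ≤ Pm: Newton's method on f(x) = 1/x - a gives
        # x_{k+1} = x_k * (2 - a * x_k), using only multiplication
        # and subtraction.  Assumes a > 0; for a < 0, apply to -a
        # and negate the result.
        m, e = math.frexp(a)           # a = m * 2^e with 0.5 <= m < 1
        x = math.ldexp(1.0, -e)        # x_0 = 2^(-e), within 2x of 1/a
        for _ in range(iterations):
            x = mul(x, 2 - mul(a, x))  # correct bits double each pass
        return x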
vi. Argue that if you can do both Pm and Pr efficiently, then you
can also do Pd efficiently.
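The key identity is a / b = a * (1 / b): one call to Pr followed
by one call to Pm. A sketch (mul and recip are stand-ins for the
efficient routines):

    def mul(a, b):
        return a * b      # stand-in for Pm

    def recip(b):
        return 1 / b      # stand-in for Pr

    def div(a, b):
        # Pd ≤ Pm and Pr: a / b = a * (1 / b).
        return mul(a, recip(b))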
vii. Argue that if we can do Ps efficiently, then we can also do
the other three efficiently: Pm, Pr, and Pd. (This might be a
quick-and-dirty way to produce a halfway efficient
implementation.) [Note: not all of the reductions above are
needed to show this part (vii).]
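To see how the pieces could chain together (a sketch under the
same assumptions as the earlier examples, not the only route):
squaring gives multiplication, multiplication gives reciprocals
via Newton's iteration, and those two give division.

    import math

    def square(a):
        return a * a                   # the one assumed-efficient routine (Ps)

    def mul(a, b):
        # Pm ≤ Ps via the squaring identity.
        return (square(a + b) - square(a - b)) / 4

    def recip(a, iterations=6):
        # Pr ≤ Pm via Newton's iteration (assumes a > 0).
        m, e = math.frexp(a)
        x = math.ldexp(1.0, -e)
        for _ in range(iterations):
            x = mul(x, 2 - mul(a, x))
        return x

    def div(a, b):
        # Pd ≤ Pm and Pr: a / b = a * (1 / b).
        return mul(a, recip(b))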
Revision date: 2012-11-26.
(Please use ISO 8601, the International Standard.)