CS 2734
Computer Organization II -- Spring 2001
Review for Exam 2 (16 April 2001)
- Memory elements (Section B.5, pages B-21 to B-25.) You should be able
  to explain how the following items work:
- Cross-coupled NOR gates that store an internal value.
- The D latch, which responds to an asserted clock signal.
- The D flip-flop, which responds to a falling clock signal.
- Clocks -- the basic idea of how clocked circuits work,
with a latched state element feeding into combinational logic,
which in turn feeds in a loop back to the latched state element for
the next clock cycle.
- Section B.4, pages B-18 to B-21.
- Chapter 5, pages 341-343.
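The clocked-loop idea above can be sketched as a short Python simulation (an
illustration only, not hardware): a flip-flop object latches its input once
per clock edge, combinational logic computes the next value from the current
state, and that value is latched on the following edge. The 3-bit counter is
a hypothetical example circuit, not one from the book.

```python
# Sketch of a clocked state loop: latched state element -> combinational
# logic -> back to the state element for the next cycle.

class DFlipFlop:
    """Holds one bit-vector of state; captures D only at a clock edge."""
    def __init__(self):
        self.q = 0                     # current (latched) state

    def clock_edge(self, d):
        self.q = d                     # new value visible after the edge
        return self.q

def run_counter(cycles):
    """3-bit counter: a flip-flop plus '+1 mod 8' combinational logic."""
    reg = DFlipFlop()
    trace = []
    for _ in range(cycles):
        next_value = (reg.q + 1) % 8   # combinational logic (no state)
        reg.clock_edge(next_value)     # latched at the clock edge
        trace.append(reg.q)
    return trace
```

Note how the combinational logic reads only the latched value, so the loop is
stable within each cycle even though its output feeds back to its own input.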
- Datapath Overview (Sections 5.1 and 5.2). Rough organization
and components, including
- Instruction Register (Not a true component in the single-cycle
implementation -- see below.)
- Instruction Memory (Read-only, for instructions.)
- Register File (Read and write the 32 registers.)
- ALU (Combinational logic to do arithmetic.)
- Data Memory (Read and write memory.)
- Single-cycle Implementation (Section 5.3). Note: Labs 8 and 9
are also relevant here.
- Above components, plus
- Implementation of the five basic instructions.
- R-type (add, sub, etc.)
- lw
- sw
- beq
- j
- Be able to trace through a single-cycle diagram for
each instruction (such as Figure 5.29).
- Understand control signals: which ones need to be asserted
for a given instruction. (You do not need to memorize the
tables that specify control signals, but you should understand
how they work, including the reasons for "don't care" entries.)
- Why the single-cycle implementation is not used.
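As a study aid, the single-cycle control settings discussed above can be
written out as a Python table. The signal names follow the textbook's
datapath figures; "X" marks the "don't care" entries, where a signal's value
cannot affect the result for that instruction. Treat the exact values as
something to verify against the book's own control table.

```python
# Sketch of the single-cycle control table (one set of signals per
# instruction class).  "X" = don't care: e.g. sw writes no register,
# so RegDst and MemtoReg can be anything.
CONTROL = {
    "R-type": dict(RegDst=1,   ALUSrc=0, MemtoReg=0,   RegWrite=1,
                   MemRead=0, MemWrite=0, Branch=0, ALUOp="10"),
    "lw":     dict(RegDst=0,   ALUSrc=1, MemtoReg=1,   RegWrite=1,
                   MemRead=1, MemWrite=0, Branch=0, ALUOp="00"),
    "sw":     dict(RegDst="X", ALUSrc=1, MemtoReg="X", RegWrite=0,
                   MemRead=0, MemWrite=1, Branch=0, ALUOp="00"),
    "beq":    dict(RegDst="X", ALUSrc=0, MemtoReg="X", RegWrite=0,
                   MemRead=0, MemWrite=0, Branch=1, ALUOp="01"),
}
```

The point of the exercise is not to memorize the table but to be able to
derive each entry from what the instruction must do in the datapath.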
- Multi-cycle Implementation (Sections 5.4 and 5.6)
- Above components, plus
- Instruction register. (Note:
  An "instruction register" component is not included
  in the book's single-cycle implementation, since there
  it was not really a register -- it held no state.)
- Use of a register between each pair of components,
so that there is a latch between clock cycles. Thus
the components in order of execution of a lw instruction are:
(Each item below is labeled either as a latched register or as a
functional component.)
- Latched register: PC
- Component without state: (Instruction) Memory
- Latched register: Instruction Register
- Component with state: Register File (read)
- Latched registers: A and B Registers
- Component, combinational logic: ALU
- Latched register: ALUOut Register
- Component with state: (Data) Memory
- Latched register: Memory Data Register
- Component with state: Register File (write)
- Implementation of basic instructions.
- R-type (add, sub, etc.)
- lw
- sw
- beq
- j
- Five cycles for instructions (only lw needs all five).
- Be able to trace through a multi-cycle diagram for
each instruction. (You should try to have Figure 5.35
more-or-less memorized.)
- Understand control signals and which are needed.
(Not just one set of control signals for each instruction, but
different control signals for different cycles of different instructions.)
- Finite state machine for control (Section 5.4).
- The new Control component must have state because its
outputs (the control signals) depend on the instruction op code
and the particular cycle of that instruction.
- One can use a finite state machine to specify the control.
- Appendix C talks about implementing this machine (we did not
cover this).
- The book's machine had 10 states (0 through 9) giving
all possibilities for each cycle of each instruction.
- A simple implementation needs an extra state register that
holds the current state, so that the outputs can depend on the
op code input as well as the previous state number. The next
state will also be an output.
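The control FSM above can be sketched as a Python next-state function. The
state numbering here (0 fetch, 1 decode, 2 address calculation, 3 lw memory
read, 4 lw write-back, 5 sw memory write, 6 R-type execute, 7 R-type
completion, 8 beq, 9 j) is an assumption to check against the book's FSM
diagram, but walking the machine from the fetch state reproduces the cycle
counts discussed in class.

```python
# Sketch of the 10-state multi-cycle control FSM.  The next state depends
# on the current state and (in states 1 and 2) on the instruction opcode.
def next_state(state, opcode):
    if state == 0:                       # instruction fetch
        return 1
    if state == 1:                       # decode / register fetch
        return {"lw": 2, "sw": 2, "R": 6, "beq": 8, "j": 9}[opcode]
    if state == 2:                       # memory address computation
        return 3 if opcode == "lw" else 5
    if state == 3:                       # lw: memory read
        return 4
    if state == 6:                       # R-type: ALU execute
        return 7
    return 0                             # states 4, 5, 7, 8, 9: instruction done

def cycles(opcode):
    """Count clock cycles for one instruction by walking the FSM."""
    state, n = 0, 0
    while True:
        n += 1
        state = next_state(state, opcode)
        if state == 0:
            return n
```

This makes the point on cycle counts concrete: only lw visits five states.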
- Advantages of multi-cycle implementation (text, page 377).
- Can share functional units during different clock cycles,
thus saving hardware.
- Allows instructions with different numbers of (shorter) cycles.
(Thus, don't have to make the cycle as long as the longest instruction.)
- Exceptions and interrupts, in the multi-cycle implementation
(Section 5.6)
- Exceptions (such as overflow or undefined instruction)
are internal.
- Interrupts (such as I/O device request) are external.
- Implemented either
- with a single interrupt handler and
a cause register to show the reason for the interrupt (as in MIPS),
or,
- with a vectored interrupt, where there are different
  interrupt handlers in a vector for different kinds of causes.
- The text shows interrupts for overflow or for an
undefined instruction.
In either case, an extra control signal loads
C0 00 00 00, the address of the handler, into the PC.
The control also sets the cause register to 0 for an undefined
instruction or to 1 for overflow. MIPS saves the old PC value in
another register (the EPC register), so that control can be
transferred back to the program if desired.
- Finite state machine for Control unit needs two extra states,
10 and 11.
- We talked in class about using interrupts to implement
new floating point instructions in software, without adding
hardware.
- Exceptions and the trap handler in MIPS (Lab 10).
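The exception mechanism above can be modeled in a few lines of Python,
assuming the MIPS-style convention from the text: record the cause, save the
old PC in EPC, and load the fixed handler address C0 00 00 00 into the PC.

```python
# Sketch of MIPS-style exception handling state changes.
HANDLER_ADDRESS = 0xC0000000   # "C0 00 00 00" in the text

CAUSE_UNDEFINED = 0            # undefined instruction
CAUSE_OVERFLOW  = 1            # arithmetic overflow

class CPU:
    def __init__(self):
        self.pc = 0
        self.epc = 0           # saved PC, so control can return
        self.cause = 0         # reason, examined by the handler

    def raise_exception(self, cause):
        self.cause = cause
        self.epc = self.pc     # remember where we were
        self.pc = HANDLER_ADDRESS
```

The single handler at HANDLER_ADDRESS then inspects the cause register; a
vectored scheme would instead index a table of handler addresses by cause.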
- Coding and information theory in computer storage and communications
- Shannon's information theory
- Intuitive idea of information = amount of "surprise"
  in a message.
- Entropy = amount of information in a message.
- Definition of Entropy. Examples.
- Idea of Channel Capacity and its formula for
the Binary Symmetric Channel.
- Examples of the error rate when a bit is repeated n
times when transmitted.
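These quantities are straightforward to compute. The Python sketch below
implements the entropy definition, the binary symmetric channel capacity
C = 1 - H(p), and the residual error rate of an n-fold repetition code,
assuming majority-vote decoding with n odd.

```python
from math import log2, comb

def entropy(probs):
    """H = -sum p_i * log2(p_i), in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1 - entropy([p, 1 - p])

def repetition_error(p, n):
    """Probability a majority vote over n copies decodes wrongly (n odd):
    more than half of the copies were flipped by the channel."""
    k0 = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k0, n + 1))
```

For example, with crossover probability 0.1, repeating each bit 3 times
drops the error rate from 0.1 to 0.028, at the cost of a rate of 1/3.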
- The three types of coding:
- Source coding, using data compression.
- Channel coding, using error detection/correction.
This is the area we went into in more detail, including
a discussion of the Hamming code to do single-error
correction and double-error detection:
- The idea of a parity check (even parity) using an extra
check bit.
- Assume the message to be corrected by the Hamming code
is in bit positions 1, 2, 3, 4, 5, ....
- The Hamming code uses bits in positions 1, 2, 4, 8, 16, 32, ...
for (even) parity checks of selected bits (not all of them).
- The parity bit in position 1 checks odd-numbered bits
(those for which the position number written in binary has
a 1 as least significant bit).
- The parity bit in position 2 checks bits 2, 3, 6, 7, 10, 11,
14, 15, ...
(those for which the position number written in binary has
a 1 as next-to-least significant bit).
- The parity bit in position 4 checks bits 4, 5, 6, 7,
12, 13, 14, 15, ...
(those for which the position number written in binary has
a 1 as second-from-least significant bit).
- The parity bit in position 8 checks bits 8, 9, 10, 11,
12, 13, 14, 15, 24, ...
(those for which the position number written in binary has
a 1 as third-from-least significant bit).
- The parity bit in position 16 checks bits 16, 17, 18, 19,
  20, 21, ...
  (those for which the position number written in binary has
  a 1 as fourth-from-least significant bit).
- We went over an example showing how it works.
(See tables at the end for this example.)
- Add a parity bit in position 0 to check all bits.
This gives double error detection as well as single error
correction.
- Secrecy coding, using cryptography.
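The Hamming scheme above can be sketched in Python. This is an illustrative
single-error-correcting version (without the extra position-0 parity bit
that adds double-error detection); positions are 1-based as in the tables
at the end.

```python
# Sketch of the single-error-correcting Hamming code with even parity.

def hamming_encode(data):
    """Place data bits in the non-power-of-two positions 3, 5, 6, 7, 9, ...
    and fill positions 1, 2, 4, 8, ... with even-parity check bits."""
    word = {}                            # position -> bit
    bits = list(data)
    pos = 1
    while bits:
        if pos & (pos - 1):              # not a power of two: data position
            word[pos] = bits.pop(0)
        pos += 1
    total = pos - 1
    p = 1
    while p <= total:
        # Parity bit p covers every position whose binary form has bit p set.
        covered = [word[i] for i in range(p + 1, total + 1)
                   if i & p and i in word]
        word[p] = sum(covered) % 2       # make each check group even
        p <<= 1
    return [word[i] for i in range(1, total + 1)]

def hamming_syndrome(word):
    """Return 0 if all parity checks pass; otherwise the 1-based position
    of the single-bit error (the sum of the failing checks' positions)."""
    n = len(word)
    syndrome = 0
    p = 1
    while p <= n:
        group = sum(word[i - 1] for i in range(1, n + 1) if i & p)
        if group % 2:                    # even parity violated
            syndrome += p
        p <<= 1
    return syndrome
```

Encoding the data bits 1101101 reproduces the word 11101010101 from the
example below, and flipping position 11 yields syndrome 1 + 2 + 8 = 11.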
- Microprogramming (Section 5.5).
- Brief discussion of this topic.
- Microinstructions allow easier implementation of very
complex higher-level instructions.
- Result may be much slower.
- Examples in class included implementing the same machine model
with very different classes of hardware. (As in the IBM 360.)
Here is a table showing the parity checks for the first 18 positions
of the Hamming code. (Check bits are in positions 1, 2, 4, 8, and 16.)
Position | Binary | Check:1 | Check:2 | Check:4 | Check:8 | Check:16
---------+--------+---------+---------+---------+---------+---------
       1 |      1 |    x    |         |         |         |
       2 |     10 |         |    x    |         |         |
       3 |     11 |    x    |    x    |         |         |
       4 |    100 |         |         |    x    |         |
       5 |    101 |    x    |         |    x    |         |
       6 |    110 |         |    x    |    x    |         |
       7 |    111 |    x    |    x    |    x    |         |
       8 |   1000 |         |         |         |    x    |
       9 |   1001 |    x    |         |         |    x    |
      10 |   1010 |         |    x    |         |    x    |
      11 |   1011 |    x    |    x    |         |    x    |
      12 |   1100 |         |         |    x    |    x    |
      13 |   1101 |    x    |         |    x    |    x    |
      14 |   1110 |         |    x    |    x    |    x    |
      15 |   1111 |    x    |    x    |    x    |    x    |
      16 |  10000 |         |         |         |         |    x
      17 |  10001 |    x    |         |         |         |    x
      18 |  10010 |         |    x    |         |         |    x
Here is a table showing the encoding of data bits 1101101 (placed in the
non-check positions below). Check bits are determined for positions 1, 2,
4, and 8, yielding the word 11101010101; check bits are marked with an
asterisk.

Position | Binary | Word | Check:1 | Check:2 | Check:4 | Check:8
---------+--------+------+---------+---------+---------+--------
       1 |      1 |  1*  |    1    |         |         |
       2 |     10 |  1*  |         |    1    |         |
       3 |     11 |  1   |    1    |    1    |         |
       4 |    100 |  0*  |         |         |    0    |
       5 |    101 |  1   |    1    |         |    1    |
       6 |    110 |  0   |         |    0    |    0    |
       7 |    111 |  1   |    1    |    1    |    1    |
       8 |   1000 |  0*  |         |         |         |    0
       9 |   1001 |  1   |    1    |         |         |    1
      10 |   1010 |  0   |         |    0    |         |    0
      11 |   1011 |  1   |    1    |    1    |         |    1

(* marks a check bit)
Here is the result of a single error in position 11 (changed from
a 1 to a 0). Three of the four parity checks fail, as shown below.
Adding the position numbers of the failing checks gives the position
number of the error bit: 1 + 2 + 8 = 11 in this case.
Position | Binary | Word      | Check:1 | Check:2 | Check:4 | Check:8
---------+--------+-----------+---------+---------+---------+--------
       1 |      1 | 1         |    1    |         |         |
       2 |     10 | 1         |         |    1    |         |
       3 |     11 | 1         |    1    |    1    |         |
       4 |    100 | 0         |         |         |    0    |
       5 |    101 | 1         |    1    |         |    1    |
       6 |    110 | 0         |         |    0    |    0    |
       7 |    111 | 1         |    1    |    1    |    1    |
       8 |   1000 | 0         |         |         |         |    0
       9 |   1001 | 1         |    1    |         |         |    1
      10 |   1010 | 0         |         |    0    |         |    0
      11 |   1011 | 0 (error) |    0    |    0    |         |    0
---------+--------+-----------+---------+---------+---------+--------
Result of check:              |  fails  |  fails  | passes  |  fails