NAND2GO Chapter 3. Sequential Logic



1. Context

We have grasped the basic concepts and methods of combinational logic in the first 2 chapters.

2. Goal

  • Expand hardware in the time dimension
  • Storage function
  • Control function
  • Develop computational thinking

3. Measurable Effects

  • Understand the role and significance of the DFF, the fundamental chip of sequential logic
  • Implement storage chips: Register, Memory
  • Implement control chips: Counter
  • Summarize the basic paradigm for building simple sequential chips

4. Outputs

  • Register
  • Memory
  • Counter

5. Process

  • Clarify the characteristics of and differences between sequential logic and combinational logic
  • Understand the concept of the Clock and time
  • Gain a preliminary understanding of Turing completeness
  • Implement Register, Memory, and Counter based on DFF, and summarize the basic paradigm for building simple sequential chips

6. Inputs

  • Combinational chips implemented in the first 2 chapters
  • DFF

7. External Factors

Introduction

In Chapters 1 and 2, we learned about Boolean logic and Boolean arithmetic. In contrast to what we will discuss in this chapter, they can be collectively called combinational logic: the output of a combinational chip depends solely on the current input, regardless of the chip's previous outputs. As the previous chapters showed, no matter how complex a computable problem is (a problem is computable if, for any given input, the output is uniquely determined), given enough space and resources we can always build a chip (perhaps a very large one) to solve it. So is combinational logic alone enough to build a computer like the ones we use every day?

Think about the screen we are facing. At this very moment, with our hands away from the keyboard and mouse, we are not inputting anything to the computer, yet the screen is still displaying, that is, outputting. More telling still, the current output of a computer depends not only on the current input but also on the computer's previous state. So we can conclude that combinational logic alone is not enough: something must work to maintain state, so that the computer can memorize and recall information for future operations. That is what we will discuss in this chapter.

Maintaining state means expanding hardware in the time dimension rather than only the space dimension, as combinational logic does, and that is why we call it sequential logic. Sequential logic has at least two main advantages. First, with the ability to maintain state, we do not have to supply every required value all the time; we can input a value when it is first available, keep it in some chip, and reuse it later without entering it again. Second, it offers another way to solve complex problems: even though computing resources are limited, we can spend more time on a problem and obtain the same result. A more elaborate discussion relates to the concept of Turing completeness; interested readers can consult the further reading listed in our additional material.

Since maintaining state, or memorizing values, is time-dependent, we must talk about the Clock. In most computers there is a master clock that continuously delivers a periodic signal. Every sequential chip receives this time signal, so that different chips can perform their functions in a uniform time frame. Just as we use 1 and 0 to represent high and low (electric) potentials, the clock signal expresses time by alternating periodically between high and low potentials: the signal falls from high to low, stays low for a while, rises back to high, and the span from one high-to-low transition to the next is referred to as one cycle. For such a discrete periodic time signal, we can use the number of cycles to quantize time. A particularly elegant form is a square wave in which the high and low potentials each occupy 50% of the cycle, as shown below:

Fig 3.1 Clock Signal

We often call the low-potential half the tick phase and the high-potential half the tock phase. Clearly, the clock signal has two special time nodes per cycle: the fall from tock to tick (the falling/negative edge) and the rise from tick to tock (the rising/positive edge). The falling edge is obviously the moment a cycle starts, as we specified earlier, while the significance of the rising edge is less obvious. In an analog circuit, the rising and falling edges of a signal can carry distinctly different physical meanings, but for our discrete periodic clock signal, a clock with a single pulse marking the start of each cycle would seem to be enough. In fact, signal transmission also takes time: because circuit structures are asymmetric, the outputs of different chips take different amounts of time to be reflected in the circuits of the next stage.

Fig 3.2 Time Matters

For example, suppose chip B's and chip C's output signals are both delivered to chip A's input, but due to the circuit design, B's signal takes longer to arrive at A than C's does. In the interval after C's signal has arrived but before B's has, A's output is meaningless. If another signal could tell us that all the signals in the circuit have been updated, then A's output would clearly be accurate. This is where the rising edge makes sense: the falling edge marks the beginning of each cycle, and by designing the duty cycle of the signal, the rising edge can mean that the circuit has responded adequately and we may read the signal we need. This approach can be followed in a hardware simulator design to make the idea clearer: simulate sequential-chip state changes and outputs on the tick phase, and simulate the outputs of combinational chips on the tock phase; of course, the beginning of the simulation may require additional consideration. Students who intend to implement a hardware simulator can think further along these lines.

Next, we discuss the most basic sequential component; clearly, such a component should have the function of maintaining state. Imagine a flashlight with two states, on and off (corresponding to outputs 1 and 0). It has a button: when the flashlight is on, pressing the button turns it off; when the flashlight is off, pressing the button turns it on. Obviously, such a device is different from the combinational chips we have encountered before. The output of a combinational chip from the first two chapters depends entirely on its input, but the output of the flashlight is related not only to the input but also to the flashlight's current state. We abstract such a flashlight into a chip and, combining it with the aforementioned concept of time, we can describe the chip's function precisely:

Fig 3.3 DFF

Here t indexes the cycles of the clock signal, and the chip's behavior is out(t) = in(t − 1): the output of the chip in one cycle equals its input in the previous cycle. The small white triangle in the figure represents the clock-signal input. Such a basic sequential logic chip is called a data flip-flop (DFF). In fact, a DFF can be built from NAND gates alone; the following figure shows a typical positive-edge-triggered DFF circuit, and interested students can analyze its operating mechanism. In our course, the DFF is given as a built-in basic element in order to simplify the clock logic and the circuit complexity. Next, we will use the DFF and the combinational chips of the first 2 chapters to build a series of commonly used sequential chips.

Fig 3.4 A positive-edge-triggered D flip-flop

One important difference between a combinational chip and a sequential chip is that a sequential chip has state (the flashlight is on or off). The output of a sequential chip depends on its state (as determined by previous inputs and states) and on the current input, whereas the output of a combinational chip depends only on the current input. Another difference between them is that the DFF provides a delay effect, and this delay makes circuit loops possible. The following figure shows a simple comparison:

In a combinational chip, if the circuit is looped, the circuit easily becomes unstable and oscillates. Thanks to its delay effect, a DFF acts as a partition, so that such instability does not occur and loopback becomes possible. Turing completeness requires branching and looping at the hardware level: if Mux, DMux, and other combinational chips provide the branching, then the DFF of sequential logic provides the looping, and Turing completeness is thus satisfied at the hardware level.

Register

Let’s clarify what we have so far and what we need to do. What we have now are the combinational chips built in the first 2 chapters and DFFs synchronized with the master clock’s signal. A register is a storage device that can “maintain”, or “memorize”, a value over time, and building one is the aim of this part. All of this can be summarized clearly by the following simplified logic model, so in later parts of this chapter we will go directly to a logic model.

Logic Model

Inputs: chips implemented in first 2 chapters, DFF

Outputs: Bit (1-bit register), w-bit register

Effects: memorize and recall a bit or a word over time


Suppose we put a DFF directly into a circuit. We can see that it always outputs the previous clock cycle’s input. It does maintain a value, but uncontrollably, and only for one cycle; it behaves more like a delayer. To maintain a value for an arbitrarily long time, simply feeding the DFF’s output back into its input will do. However, we then cannot change the maintained value, because there would be a conflict whenever the input and output differ. So a properly functioning 1-bit register should consist of 3 pins: in, out, and load. When we assert load, the register saves the current in value, and out then becomes the newly saved value. When there is no load signal, out maintains its value until a load signal comes. Clearly, load is a control signal, and we can recall which chip implemented earlier helps with this feature.
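One possible wiring, shown as a minimal sketch in Nand2Tetris-style HDL, uses the Mux from Chapter 1 to resolve the input/output conflict; the internal pin names (next, feedback) are illustrative, and other wirings are possible:

    // Bit (1-bit register): if load(t) then out(t+1) = in(t),
    // otherwise out(t+1) = out(t).
    CHIP Bit {
        IN in, load;
        OUT out;

        PARTS:
        // The Mux chooses between the stored value and the new input,
        // resolving the conflict described above.
        Mux(a=feedback, b=in, sel=load, out=next);
        // The DFF delays the chosen value by one cycle; feeding its
        // output back keeps the value for as long as load stays 0.
        DFF(in=next, out=out, out=feedback);
    }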

The 1-bit register is usually called Bit, as it can memorize a single bit over time. Just as in previous chapters, we extend the 1-bit register to arbitrary width, and we usually call the w bits of a w-bit register a word. The reason is the same: multi-bit is the most fundamental parallel mechanism. With multi-bit, we can obtain the multi-bit result of a multi-bit input in one clock cycle (we can regard every bit of the result as a function of the w bits of the input word, so computing the same result with a 1-bit chip instead of a w-bit chip could take up to w×w clock cycles). The extension from word to bit-matrix is similar: although the meaning of an operation becomes harder for humans to grasp, the efficiency improvement and the expanded set of operations are considerable.

In our course, we implement the 16-bit register, consistent with the previously implemented chips and our intended design. The only difference between Bit and a w-bit register is that the w-bit register’s input and output are words instead of bits, as sketched below.
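A minimal sketch of the 16-bit register, again in Nand2Tetris-style HDL, simply places 16 Bit chips side by side and feeds them the same load signal:

    CHIP Register {
        IN in[16], load;
        OUT out[16];

        PARTS:
        // 16 Bit chips in parallel, one per bit of the word,
        // all controlled by the same load signal.
        Bit(in=in[0], load=load, out=out[0]);
        Bit(in=in[1], load=load, out=out[1]);
        Bit(in=in[2], load=load, out=out[2]);
        Bit(in=in[3], load=load, out=out[3]);
        Bit(in=in[4], load=load, out=out[4]);
        Bit(in=in[5], load=load, out=out[5]);
        Bit(in=in[6], load=load, out=out[6]);
        Bit(in=in[7], load=load, out=out[7]);
        Bit(in=in[8], load=load, out=out[8]);
        Bit(in=in[9], load=load, out=out[9]);
        Bit(in=in[10], load=load, out=out[10]);
        Bit(in=in[11], load=load, out=out[11]);
        Bit(in=in[12], load=load, out=out[12]);
        Bit(in=in[13], load=load, out=out[13]);
        Bit(in=in[14], load=load, out=out[14]);
        Bit(in=in[15], load=load, out=out[15]);
    }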

Memory

Logic Model

Inputs: chips implemented in first 2 chapters, DFF, register

Outputs: RAMn

Effects: design a bank of a certain number of registers that can write or read a word at any specified address

Word-scale operations (mostly on 16, 32, or 64 bits) are appropriate for most situations. So we want to design a bank that saves words, into which we can put a word, and from which we can take a word, at any position. In this chapter and later ones, we will use write and read instead of memorize and recall, and we call a position in the bank an address. Such a register bank is usually called Random Access Memory (RAM); the term random means that a word written at any specified address can be read back at the same minimal time cost. A natural thought is to implement a small RAM first, such as RAM8 (a RAM consisting of 8 main storage registers), and then, by stacking small RAMs, implement bigger ones; continuing in this way, we can implement a RAM of any size. The structure of a RAM is similar to a simple communication network, which is a point of computational thinking.

Fig 3.6 RAMn Conceptual Structure

A classical RAM includes 3 inputs and 1 w-bit output. The 3 inputs are a data input, an address input, and a load bit. The data input is w bits wide, like the output. The address is in binary form and takes several bits. When we load the RAM, the current input value is written into the register at the specified address, and the output then changes to that input value. When the RAM is not loaded, the output is always the word stored at the specified address. In our course, we implement RAMs from 8 words up to 16K words.
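As a sketch of RAM8 in the same HDL, assuming the DMux8Way and Mux8Way16 chips from Chapter 1 (the internal pin names l0..l7 and r0..r7 are illustrative): the load bit is routed to exactly one register, and the addressed register’s word is selected as the output:

    CHIP RAM8 {
        IN in[16], load, address[3];
        OUT out[16];

        PARTS:
        // Route the load bit to the one register selected by address.
        DMux8Way(in=load, sel=address,
                 a=l0, b=l1, c=l2, d=l3, e=l4, f=l5, g=l6, h=l7);
        // Eight 16-bit registers, one per address; all see the data
        // input, but at most one of them is loaded in a given cycle.
        Register(in=in, load=l0, out=r0);
        Register(in=in, load=l1, out=r1);
        Register(in=in, load=l2, out=r2);
        Register(in=in, load=l3, out=r3);
        Register(in=in, load=l4, out=r4);
        Register(in=in, load=l5, out=r5);
        Register(in=in, load=l6, out=r6);
        Register(in=in, load=l7, out=r7);
        // Select the addressed register's word as the output.
        Mux8Way16(a=r0, b=r1, c=r2, d=r3, e=r4, f=r5, g=r6, h=r7,
                  sel=address, out=out);
    }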

Counter

Logic Model

Inputs: chips implemented in first 2 chapters, DFF, register

Outputs: Counter

Effects: generate the instruction address in every clock cycle; form part of the control unit in the von Neumann architecture

A counter, as the name suggests, is a chip that counts integers continuously from zero. A counter is often referred to as the Program Counter (PC), mainly because of its control features. The function of the counter is very simple: it outputs an integer that increases by one every cycle of the clock signal. Yet this integer tells the computer which instruction to read and execute, which is what implements the control function. The constant add-one operation executed by the counter is, in fact, the same as the successor function in lambda calculus, and it is also closely related to Turing completeness, so it has a profound mathematical background and significance.

In order to enhance the control function of the counter, we often add a reset pin that resets the output, load and in pins that overwrite the current counter value, and an inc pin that controls whether to count. With that, the function of the counter we are to build is basically clear. Oriented by this design goal, we can now think about the necessary components. First, the counter must have state, namely its current count, so we must have a Register. Second, how do we implement the increment function; have we implemented a relevant chip in previous chapters? Third, what is the logical relation between the control pins; which pin has the higher priority, and which chip can reflect that priority order? Finally, we can draw a sketch on paper and analyze whether it meets our functional demands, then debug with the hardware simulator, modifying the circuit until no error is reported. One possible sketch follows.
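A minimal sketch in Nand2Tetris-style HDL, assuming the Inc16 and Mux16 chips from Chapter 2 and a priority order of reset over load over inc (the internal pin names plusone, t0, t1, t2, and feedback are illustrative):

    CHIP PC {
        IN in[16], load, inc, reset;
        OUT out[16];

        PARTS:
        // Candidate next values, wired so that later Mux16 stages
        // override earlier ones: inc < load < reset in priority.
        Inc16(in=feedback, out=plusone);
        Mux16(a=feedback, b=plusone, sel=inc, out=t0);
        Mux16(a=t0, b=in, sel=load, out=t1);
        Mux16(a=t1, b=false, sel=reset, out=t2);
        // The register always loads, so the chosen candidate
        // appears at the output in the next clock cycle.
        Register(in=t2, load=true, out=out, out=feedback);
    }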

Project

Most implementation details have been mentioned in the previous parts; this section supplies several tips.

DFF is a built-in chip; there is no need to implement it. Building from small memory to large memory is a staged process: if we expand every component down to the register level, we can see that we have, in effect, implemented a super Mux and a super DMux, as sketched below.
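For instance, a RAM64 can be stacked from eight RAM8 chips; a minimal sketch under the same assumptions as before (the pin names l0..l7 and r0..r7 are illustrative), with the high address bits choosing the bank and the low bits addressing within it:

    CHIP RAM64 {
        IN in[16], load, address[6];
        OUT out[16];

        PARTS:
        // The high 3 address bits select one RAM8 bank ("super DMux").
        DMux8Way(in=load, sel=address[3..5],
                 a=l0, b=l1, c=l2, d=l3, e=l4, f=l5, g=l6, h=l7);
        // The low 3 address bits address a register inside each bank.
        RAM8(in=in, load=l0, address=address[0..2], out=r0);
        RAM8(in=in, load=l1, address=address[0..2], out=r1);
        RAM8(in=in, load=l2, address=address[0..2], out=r2);
        RAM8(in=in, load=l3, address=address[0..2], out=r3);
        RAM8(in=in, load=l4, address=address[0..2], out=r4);
        RAM8(in=in, load=l5, address=address[0..2], out=r5);
        RAM8(in=in, load=l6, address=address[0..2], out=r6);
        RAM8(in=in, load=l7, address=address[0..2], out=r7);
        // The same high bits pick the selected bank's word ("super Mux").
        Mux8Way16(a=r0, b=r1, c=r2, d=r3, e=r4, f=r5, g=r6, h=r7,
                  sel=address[3..5], out=out);
    }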

The single-step function and the variable display function are very useful; we can use them to find out where a problem lies.

Prospect

Now that we have implemented all the basic components necessary for the major parts of a computer, we will use these components to implement a simple computer in accordance with the von Neumann architecture in Chapter 5, the last part of the first half of our course. But before that, we need to understand the "language" that computers can understand (Chapter 4). Although the content of the first three chapters is not difficult, the computational thinking it reflects is very important.

In addition, there are many differences between actual chips and the chips built in our course; actual chips are more complex and better optimized, and this difference is worth understanding.

Glossary

Sequential logic

Combinational logic

Clock

Data flip-flop/Delayed flip-flop (DFF)

Register

Memory

RAM

Counter

Turing completeness


References


  • Noam Nisan, Shimon Schocken, 《The Elements of Computing Systems: Building a Modern Computer from First Principles (Nand2Tetris)》, MIT Press, 2005, ISBN: 978-0262640688

Additional Reading Material

  • Charles Petzold, 《The Annotated Turing》, 1st Edition, Wiley, 2008, ISBN: 0470229055 (ISBN-10); 978-0470229057 (ISBN-13)
