CLASS 4

Five Generations of Modern Computers

First Generation (1945-1956)

First generation computers were characterized by the fact that operating instructions were made-to-order for the specific task for which the computer was to be used. Each computer had a different binary-coded program called a machine language that told it how to operate. This made the computer difficult to program and limited its versatility and speed. Other distinctive features of first generation computers were the use of vacuum tubes (responsible for their breathtaking size) and magnetic drums for data storage.

Second Generation (1956-1963)

In 1948, the invention of the transistor greatly changed the computer's development. The transistor replaced the large, cumbersome vacuum tube in televisions, radios and computers. As a result, the size of electronic machinery has been shrinking ever since. The transistor was at work in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led to second generation computers that were smaller, faster, more reliable and more energy-efficient than their predecessors. The first large-scale machines to take advantage of this transistor technology were early supercomputers, Stretch by IBM and LARC by Sperry-Rand. These computers, both developed for atomic energy laboratories, could handle an enormous amount of data, a capability much in demand by atomic scientists. The machines were costly, however, and tended to be too powerful for the business sector's computing needs, thereby limiting their attractiveness.

Second generation computers replaced machine language with assembly language, allowing abbreviated programming codes to replace long, difficult binary codes.

These second generation computers were also of solid state design, and contained transistors in place of vacuum tubes. They also contained all the components we associate with the modern day computer: printers, tape storage, disk storage, memory, operating systems, and stored programs.

It was the stored program and programming language that gave computers the flexibility to finally be cost effective and productive for business use. The stored program concept meant that instructions to run a computer for a specific function (known as a program) were held inside the computer's memory, and could quickly be replaced by a different set of instructions for a different function. A computer could print customer invoices and minutes later design products or calculate paychecks. More sophisticated high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use during this time, and have expanded to the current day. These languages replaced cryptic binary machine code with words, sentences, and mathematical formulas, making it much easier to program a computer. New types of careers (programmer, analyst, and computer systems expert) and the entire software industry began with second generation computers.

Third Generation (1964-1971)

Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. The quartz rock eliminated this problem. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small silicon disc, which was made from quartz. Scientists later managed to fit even more components on a single chip, called a semiconductor. As a result, computers became ever smaller as more components were squeezed onto the chip. Another third-generation development was the use of an operating system that allowed machines to run many different programs at once with a central program that monitored and coordinated the computer's memory.

Fourth Generation (1971-Present)

After the integrated circuits, the only place to go was down - in size, that is. Large scale integration (LSI) could fit hundreds of components onto one chip. By the 1980's, very large scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large scale integration (ULSI) increased that number into the millions. The ability to fit so much onto an area about half the size of a U.S. dime helped diminish the size and price of computers. It also increased their power, efficiency and reliability. The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. Whereas previously the integrated circuit had had to be manufactured to fit a special purpose, now one microprocessor could be manufactured and then programmed to meet any number of demands.

In 1981, IBM introduced its personal computer (PC) for use in the home, office and schools. The 1980's saw an expansion in computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were being used. Computers continued their trend toward a smaller size, working their way down from desktop to laptop computers (which could fit inside a briefcase) to palmtop (able to fit inside a breast pocket). In direct competition with IBM's PC was Apple's Macintosh line, introduced in 1984. Notable for its user-friendly design, the Macintosh offered an operating system that allowed users to move screen icons instead of typing instructions. Users controlled the screen cursor using a mouse, a device that mimicked the movement of one's hand on the computer screen.

As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software, and information, and to communicate with each other. As opposed to a mainframe computer, which was one powerful computer that shared time with many terminals for many applications, networked computers allowed individual computers to form electronic co-ops. Using either direct wiring, called a Local Area Network (LAN), or telephone lines, these networks could reach enormous proportions. A global web of computer circuitry, the Internet, for example, links computers worldwide into a single network of information. The most popular use today for computer networks such as the Internet is electronic mail, or E-mail, which allows users to type in a computer address and send messages through networked terminals across the office or across the world.

Fifth Generation (Present and Beyond)

Defining the fifth generation of computers is somewhat difficult because the field is in its infancy. With artificial intelligence, the computer should reason well enough to hold conversations with its human operators, use visual input, and learn from its own experiences.

Using recent engineering advances, computers may be able to accept spoken word instructions and imitate human reasoning. The ability to translate a foreign language is also a major goal of fifth generation computers. This feat seemed a simple objective at first, but appeared much more difficult when programmers realized that human understanding relies as much on context and meaning as it does on the simple translation of words.

Many advances in the science of computer design and technology are coming together to enable the creation of fifth-generation computers. One such engineering advance is parallel processing, which replaces von Neumann's single central processing unit design with a system harnessing the power of many CPUs working as one. Another is superconductor technology, which allows the flow of electricity with little or no resistance, greatly improving the speed of information flow. Computers today have some attributes of fifth generation computers. For example, expert systems assist doctors in making diagnoses by applying the problem-solving steps a doctor might use in assessing a patient's needs. It will take several more years of development before expert systems are in widespread use.


Evolution of Computer Architecture

Early computers were CISC (Complex Instruction Set Computer) based, with a large, complex instruction set and several addressing modes. Examples of such machines are the IBM 360, the Intel 8085, 8086, 8088, 80286 and 80386, the VAX, and Motorola processors.

There were significant advances in high-level languages (HLLs): constructs like if-else, while, for, etc. were used frequently, but assembly language had no corresponding instructions. This resulted in a semantic gap. Bridging the gap led to an increase in the size of the instruction set and in the number of addressing modes, complicating the assembly language.
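As a small illustration (not part of the original notes), even a simple loop has to be lowered by the compiler into primitive compare, branch, and add steps. The helper sum_first_n is hypothetical, and the mnemonics in the comments are generic, not a real instruction set:

    #include <stdio.h>

    /* A sketch of the semantic gap: the high-level for loop has no single
     * machine instruction of its own; the compiler lowers it into simple
     * compare, branch, and add steps (generic mnemonics in the comments). */
    int sum_first_n(int n)
    {
        int sum = 0;
        for (int i = 1; i <= n; i++)   /* loop: CMP i, n ; BGT done   */
            sum += i;                  /*       ADD sum, sum, i       */
                                       /*       ADD i, i, 1 ; BR loop */
        return sum;                    /* done: return sum            */
    }

    int main(void)
    {
        printf("%d\n", sum_first_n(10));   /* prints 55 */
        return 0;
    }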

There was also increasing use of microcode to implement these complex instructions.

Further, RAM was very expensive, and its speed was very slow compared to that of the CPU, so calling library routines in main memory for these operations was very expensive and time consuming. It was easier instead to put the routines into microprograms in ROM.

With advances in semiconductor technology, RAM is much faster these days. Maintaining microcode, on the other hand, became a big problem because of its complexity. Further, assembly language became more and more complex, with the programmer having to remember many instructions and their addressing modes.

Analysis of instruction usage also showed that most execution time is spent on a small set of common operations:

About 95% of the time is spent on assignment statements, if, call, loop, and goto instructions.

85% of instructions used only one variable, 10% used two, and only 5% used three or more variables.

As the machine language (instruction set) becomes larger, the interpreter (microprogram) gets more and more complicated, even though most of the instructions are hardly ever used. This led to the development of the RISC (Reduced Instruction Set Computer) architecture.

Design principles for RISC machines

  1. Analyze the applications and find the key operations
  2. Design a data path optimal for the key operations (the set of registers required, the ALU, and the data bus)
  3. Design instructions for performing the key operations using the data path
  4. Add new instructions only if they do not slow down the machine
  5. Repeat the process for other resources.

RISC GOLDEN RULE - Sacrifice everything to reduce the data path cycle time

Difference Between RISC and CISC

      RISC                                    CISC
  1.  Simple instructions taking one cycle    Complex instructions taking multiple cycles
  2.  Only LOAD/STOREs reference memory       Any instruction may reference memory
  3.  Highly pipelined                        Not pipelined, or less pipelined
  4.  Instructions executed by the hardware   Instructions interpreted by the microprogram
  5.  Fixed-format instructions               Variable-format instructions
  6.  Few instructions and modes              Many instructions and modes
  7.  Complexity is in the compiler           Complexity is in the microprogram
  8.  Multiple register sets                  Single register set

Almost all instructions in the architecture complete in one cycle; any instruction that cannot be completed in a cycle (e.g. mul, div) is not made part of the instruction set.
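As a hedged illustration (not part of the original notes), such operations can be synthesized from the simple one-cycle operations that remain; the hypothetical routine below is a classic shift-and-add multiply in C:

    #include <stdio.h>

    /* Multiplication built only from shifts, adds, a bit test, and a
     * branch (the while loop) -- the kind of primitives a reduced
     * instruction set does provide. */
    unsigned multiply(unsigned a, unsigned b)
    {
        unsigned product = 0;
        while (b != 0) {
            if (b & 1u)          /* low bit of b set?          */
                product += a;    /* ... then add the shifted a */
            a <<= 1;             /* a = a * 2                  */
            b >>= 1;             /* examine the next bit of b  */
        }
        return product;
    }

    int main(void)
    {
        printf("%u\n", multiply(6, 7));   /* prints 42 */
        return 0;
    }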

LOAD/STORE architecture - only these instructions reference memory; all other instructions have only register operands.
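A small illustrative sketch (the mnemonics in the comments are generic, hypothetical ones, not a real ISA) of how a single statement is broken into explicit loads, a register operation, and a store:

    /* On a LOAD/STORE machine, memory is touched only by load and store
     * instructions; arithmetic works purely on registers. */
    void add_elements(int *a, const int *b, const int *c, int i)
    {
        int t1 = b[i];        /* LOAD  r1, b[i]    */
        int t2 = c[i];        /* LOAD  r2, c[i]    */
        int t3 = t1 + t2;     /* ADD   r3, r1, r2  */
        a[i] = t3;            /* STORE r3, a[i]    */
    }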

Pipelining - discussed earlier

No microcode - Instructions are directly executed by the hardware, eliminating the need for microcode. The significant advantage is NOT in terms of speed, but in terms of memory and simplicity.

Fixed-Format Instructions - unlike the 80386, whose instructions range from 1 to 17 bytes in length.

Reduced Instruction Set

Multiple register sets - with more chip area available, more registers can be provided, so fewer LOAD and STORE operations are required, making the code far more efficient.


Number Systems: Decimal, Binary, Octal, Hexadecimal

Binary Devices : Base 2

Only two states: OFF (0) and ON (1). But more states can be represented as a combination of these two states.

e.g. if 1 bit is used, then there are only two possible states: 0 and 1.

if 2 bits are used, then there are 4 states: 00, 01, 10, 11

If n bits are used, what is the total number of states possible ??
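With n bits there are 2^n distinct states. A minimal C sketch (illustrative only, not part of the original notes) that enumerates all of them for n = 3:

    #include <stdio.h>

    /* Prints every state representable with n bits: 2^n states in all
     * (here n = 3, so 8 states, 000 through 111). */
    int main(void)
    {
        int n = 3;
        for (unsigned s = 0; s < (1u << n); s++) {       /* 2^n states   */
            for (int bit = n - 1; bit >= 0; bit--)       /* print n bits */
                putchar(((s >> bit) & 1u) ? '1' : '0');
            putchar('\n');
        }
        return 0;
    }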


Octal Number System : Base 8

Here there are 8 possible states, from 0 to 7.
Why is this used ?

Hexa-decimal Number System : Base 16

Here there are 16 possible states, from 0 to 15. States 0 to 9 are represented by digits, and states 10 to 15 by the characters `a' through `f', i.e. 10 - a, 11 - b, 12 - c... and so on. It is a very popular, commonly used system for computers...why?
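As a small illustration (not part of the original notes), the same value can be printed in all three bases with the C standard library; each octal digit stands for 3 bits and each hexadecimal digit for 4 bits, which is what makes these bases so convenient for computers:

    #include <stdio.h>

    /* The same value printed in decimal, octal, and hexadecimal. */
    int main(void)
    {
        unsigned value = 21;                  /* 10101 in binary */
        printf("decimal: %u\n", value);       /* 21              */
        printf("octal:   %o\n", value);       /* 25  (010 101)   */
        printf("hex:     %x\n", value);       /* 15  (1 0101)    */
        return 0;
    }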

Conversions from one number system to another

  1. Binary to Decimal e.g. 111 in binary is equal to 7 in decimal

    10101 in binary is equal to 21 in decimal

  2. Binary to Octal e.g. 111 in binary is equal to 7 in octal

    10101 in binary is equal to 25 in octal

  3. Binary to Hexa-decimal e.g. 111 in binary is equal to 7 in hexa-decimal

    10101 in binary is equal to 15 in hexa-decimal
    1111111 in binary is equal to 7f in hexa-decimal

  4. Decimal to Binary e.g. 32 in decimal is equal to 100000 in binary
    24 in decimal is equal to 11000 in binary

  5. Decimal to Octal e.g. 32 in decimal == 40 in octal
    24 in decimal == 30 in octal
    An alternative way of doing the conversion: first from decimal to binary, and then from binary to octal

  6. Decimal to Hexa-decimal e.g. 32 in decimal == 20 in hexa-decimal
    21 in decimal == 15 in hexa-decimal

  7. Octal to Decimal e.g. 27 in octal == 23 in decimal

  8. Octal to Binary e.g. 345 in octal == 11100101 in binary

  9. Octal to Hexa-decimal First to binary, then to hexadecimal

  10. Hexa-decimal to Binary e.g. ab10 in hex == 1010101100010000 in binary

  11. Hexa-decimal to Decimal e.g. ab in hex == 171 in decimal

  12. Hexa-decimal to Octal First to binary, then to octal (a short C sketch for checking these conversions follows this list)
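The conversions above can be checked with a short C sketch (illustrative only, not part of the original notes): strtol parses a string of digits in a given base, and printf prints the result back out in decimal, octal, or hexadecimal.

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse values written in one base and print them in others. */
    int main(void)
    {
        long v = strtol("10101", NULL, 2);    /* binary 10101          */
        printf("%ld %lo %lx\n", v,
               (unsigned long)v,
               (unsigned long)v);             /* prints: 21 25 15      */

        v = strtol("ab10", NULL, 16);         /* hexadecimal ab10      */
        printf("%ld %lo\n", v,
               (unsigned long)v);             /* prints: 43792 125420  */
        return 0;
    }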

ASCII (American Standard Code for Information Interchange) codes and Bit-wise Logical Operations and Truth tables

not, and, andn, or, xor, nand, nor, orn, etc
a andn b : a and (not b)
a orn b : a or (not b)
a nand b : not (a and b)
a nor b : not (a or b)
and so on.....

Truth Tables : What are they??? How do we write them ???
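A truth table simply lists an operation's output for every combination of inputs. A minimal C sketch (illustrative, not part of the original notes) that prints the tables for the one-bit operations listed above:

    #include <stdio.h>

    /* For one-bit values, C's bitwise operators &, |, ^ together with
     * logical negation ! are enough to compute every table entry. */
    int main(void)
    {
        printf("a b  and or xor nand nor andn\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d %d  %3d %2d %3d %4d %3d %4d\n",
                       a, b, a & b, a | b, a ^ b,
                       !(a & b), !(a | b), a & !b);
        return 0;
    }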

Are we comfortable with binary arithmetic ???? Don't worry, we'll talk about all this in more detail sometime in the fourth week.


