The concept of an “interrupt” is based on common sense. As an example, imagine the parallel port in your PC connected to a printer. While a print operation is in progress, the CPU supplies characters to the parallel port (to be passed on to the printer) at a periodic rate. Say the parallel port runs out of data after printing 1000 characters; it now “starves” for more characters. The processor’s job is to identify the hungry parallel port and feed it more characters. This is where the concept of an “interrupt” comes in.
An “interrupt” is a signal used by an I/O device (such as the parallel port) to inform the CPU that it needs attention — here, that it must be fed more characters. As soon as the interrupt signal (an output of the I/O device) is received, the processor devotes its attention to the corresponding I/O device. In the example above, the CPU then performs a “write” operation to the parallel port.
Similarly, an interrupt could be signalled by an I/O device (say, a floppy-disk controller) indicating that it is completely filled with data, and hence that the CPU may initiate a “read” operation to retrieve the data.
Microprocessors usually have only one interrupt request input. It is therefore not possible to connect the interrupt request lines of the various I/O devices directly to this single input; they are instead connected to a device called an interrupt controller.
How does the interrupt-signaling process occur?
Another Real Machine: The DEC PDP-8
The Digital Equipment Corporation PDP-8, which originated with a very simple instruction set, was also inexpensive enough to be the first computer within the reach of many organizations, and so it holds a place in the affections of many.
Of course, a number of other computers might also have been noted as making computing more affordable, such as the IBM 650, the Bendix G-15, the Packard Bell 250, and the Royal McBee LGP 30, for example. There was more to the enduring affection in which the PDP-8 is held than its affordability, but that is a phenomenon I will not attempt to analyze at length.
The diagram below shows an overview of the instruction formats available with the PDP-8 and related computers throughout their history:
The first column shows the basic instructions included with the PDP-5 and all PDP-8 models. The opcodes for memory-reference instructions were:
000 AND And
001 TAD Two’s Complement Add
010 ISZ Increment and Skip if Zero
011 DCA Deposit and Clear Accumulator
100 JMS Jump to Subroutine
101 JMP Jump
An assembly language is a low-level language for programming computers. It implements a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language is thus specific to a particular physical or virtual computer architecture (as opposed to most high-level languages, which are usually portable).
Assembly languages were first developed in the 1950s, when they were referred to as second generation programming languages. They eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing the programmer from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on small computers), their use had largely been supplanted by high-level languages, in the search for improved programming productivity. Today, assembly language is used primarily for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
A utility program called an assembler is used to translate assembly language statements into the target computer’s machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. (This is in contrast with high-level languages, in which a single statement generally results in many machine instructions. A compiler, analogous to an assembler, is used to translate high-level language statements into machine code; or an interpreter executes statements directly.)
Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers (although many have been available for more than 40 years already) include a macro facility (described below), and are called macro assemblers.
Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions to run inline, instead of in a subroutine.
An instruction set is the list of all the instructions, with all their variations, that a processor can execute. Typical categories include:
– Arithmetic such as add and subtract
– Logic instructions such as and, or, and not
– Data instructions such as move, input, output, load, and store
– Control flow instructions such as goto, if … goto, call, and return.
An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language), the native commands implemented by a particular CPU design.
Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.
This concept can be extended to unique ISAs like TIMI (Technology-Independent Machine Interface), present in the IBM System/38 and IBM AS/400. TIMI is an ISA that is implemented as low-level software and functionally resembles what is now referred to as a virtual machine. It was designed to increase the longevity of the platform and of applications written for it, allowing the entire platform to be moved to very different hardware without modifying any software except that which comprises TIMI itself. This allowed IBM to move the AS/400 platform from an older CISC architecture to the newer POWER architecture without recompiling any part of the OS or the software associated with it. Today, several open-source operating systems can likewise be ported to almost any general-purpose CPU, because compilation from source is an essential part of their design (for example, software is often compiled at installation time).
Machine language is built up from discrete statements or instructions. Depending on the processor architecture, a given instruction may specify the operation to be performed and the operands it applies to.
In the previous lesson we discussed the evolution of the computer. In this lesson we will provide you with an overview of the basic design of a computer. You will learn how the different parts of a computer are organized and how various operations are performed between different parts to do a specific task. As you know from the previous lesson, the internal architecture of a computer may differ from system to system, but the basic organization remains the same for all computer systems.
At the end of the lesson you will be able to:
Understand the basic organization of a computer system
Understand the meaning of the Arithmetic Logic Unit, Control Unit and Central Processing Unit
Differentiate between a bit, a byte and a word
Define computer memory
Differentiate between primary memory and secondary memory
Differentiate between primary storage and secondary storage units
Differentiate between input devices and output devices
2.3 BASIC COMPUTER OPERATIONS
A computer, as shown in Fig. 2.1, performs five major operations or functions irrespective of its size and make. These are: 1) it accepts data or instructions as input, 2) it stores data, 3) it processes data as required by the user, 4) it gives results in the form of output, and 5) it controls all operations inside the computer. We discuss each of these operations below.