    SECTION 1 - Computing Today

    Historical Thoughts

    While the activity of counting objects and remembering things extends back to the earliest times of humans, the idea of a mechanical device that could aid in the counting process, or that could actually do the counting, is relatively recent. There are individual examples of record-keeping devices in human history, but these are few: the quipu of the Incas and the abacus of ancient China. The Greeks, Romans, Persians, and many other ancient cultures used variations of writing for the keeping of records or the remembering of partial answers in computations - for instance, wax tablet and stylus, clay tablets, papyrus and ink, and, later, paper. But the idea of a machine that could actually do the work of computing, rather than simply aiding the human in doing the thinking, dates back only a few hundred years.

    The computer as we know it today is the product of several major eras of human technology. Technology is the application of tools and techniques to improve the likelihood of human survival. Beyond the survival aspect, the use of tools and techniques to solve non-essential but still needed or interesting problems has given rise to many great inventions, including the automobile, the bicycle, the radio, and so on. The evolution of the computer spans these phases of development:

    1. The Mechanical era, in which the "industrial revolution" provided the mechanical techniques and devices needed to build machines of any sort;

    2. The Electronic era, in which the use of electrical devices and techniques made mechanical methods obsolete; and

    3. The Semiconductor era, in which the relatively new science of semiconductor physics and chemistry extended the original ideas of the Electronic era to new heights of performance.

    A few events will illustrate these time frames:

    • 1642: Blaise Pascal designs and builds a decimal counting device that proves that mechanical counting can be done.

    • Early 1800's: Joseph-Marie Jacquard devises a weaving loom that uses a chain of punched cards to define the functions of the shuttles through the warp, thereby defining the color pattern and texture of the cloth.

    • 1822: Charles Babbage presents his Difference Engine to the Royal Society, London, where he demonstrates the mechanical computation of logarithms.

    • 1833: Charles Babbage presents his Analytical Engine to the Royal Society; it is never completed.

    • 1830's: Ada Augusta, daughter of George Gordon, Lord Byron (the English poet), works with Babbage to develop a scheme for laying out the logical steps needed to solve a mathematical problem, and becomes the first "programmer". She also invests most of her husband's money in a scheme, built on Babbage's work, to beat the horse races - they lose their shirts.
    • 1860's: "The Millionaire", a machine that could multiply by repetitive addition, is announced.
    • 1888: In response to an invitation from the US Census Bureau, Herman Hollerith presents his tabulating machines, including a card punch, card reader, tabulator (electric adding machine), and card sorter. He wins the contract for equipment for the 1890 census.

    • 1900's: Hollerith's machines are a success, and he sells them to countries for censuses and to companies for accounting. His patents are stolen by competitors. He joins with two other companies to form the Computing-Tabulating-Recording (CTR) Company (tabulating machines, time clocks, and meat scales).

    • 1914: The CTR company hires Thomas J. Watson, Sr., as president. His job is to beat the competition and put the outfit on the map. He immediately starts training salesmen on company time, and later renames the company the International Business Machines Corporation (IBM).

    • 1920's: IBM and several others manufacture ever-more-complex electro-mechanical tabulating equipment. The stock market crash of October 1929 puts millions out of work and many companies fold. IBM reduces its activities, but never lays anybody off. They hire more salesmen.

    • 1935: President Franklin Roosevelt signs the Social Security Act, which requires that everybody in the country have a number. Great quantities of tabulating equipment are purchased to support this effort.
    • 1939: Vannevar Bush demonstrates the Differential Analyzer, the last great purely mechanical calculator.
    • 1941: The United States enters World War II. Most companies refit for the manufacture of munitions.
    • The War Years: Many new advances are made in electronics that will affect the tabulating business after the war, including radio, radar, sonar, and television.
    • 1944: J. Presper Eckert and John Mauchly are given a contract to develop a purely electronic calculator for the calculation of ballistic trajectories. They build the Electronic Numerical Integrator and Computer (ENIAC) at the Moore School of Electrical Engineering, University of Pennsylvania.

    • Late 1940's: John von Neumann, working at the Institute for Advanced Study, articulates the stored program concept; the EDSAC (Electronic Delay Storage Automatic Calculator) becomes the first practical machine built around it.

    • 1947: The transistor is invented by Shockley, Bardeen, and Brattain at Bell Laboratories.
    • 1951: A company started by Mauchly and Eckert to build electronic computers goes broke and is bought by Sperry Rand. With this help, the two deliver the first UNIVAC (Universal Automatic Computer) to the US Census Bureau, the first computer sold for commercial, non-military purposes.
    • 1955: IBM introduces the 704 series computers, large-scale scientific systems with magnetic core storage and built-in floating-point hardware.
    • 1959: IBM introduces the 1401 and related systems, bringing card-based data processing to the average company.
    • 1964: IBM bets the company on the introduction of the System/360, using microtransistors and mass-produced core storage devices, and the idea of the "non-dedicated", microprogrammed system. The product line is upward-compatible - a huge success that ultimately defines the mainframe market.
    • Late 1960's: Several companies begin to develop and deliver true integrated circuits.
    • 1971: The Intel Corporation delivers the first single-chip processor capable of executing a fully usable program, the Intel 4004; the more powerful 8080 follows in 1974. The microprocessor is born.
    • 1976-1977: The Apple Computer Company is started by two college dropouts in their garage, Steve Jobs and Steve Wozniak. Originally sold in kit form, the machine uses inexpensive parts and the home color television to bring computing to the masses. A later BASIC for the machine, Applesoft BASIC, is licensed from Bill Gates's Microsoft.
    • 1981: IBM introduces the IBM Personal Computer, and coins a term that will live forever. At first aimed at the home market, the PC is immediately adopted by businesses large and small. Since the design of the system is published, many begin to write programs for the machine and to copy its design. The use of the Intel 8088 processor ensures Intel's survival. Microsoft provides the Disk Operating System (DOS).


    Similarly, we have terminology for increments of time. Since the CPU is operating at such a high speed, it is common to refer to the very small increments of time needed for single operations. If a second is the standard increment of time, then a millisecond is 1/1,000 of a second, a microsecond is 1/1,000,000 of a second, and a nanosecond is 1/1,000,000,000 of a second. These amounts are incomprehensibly short for the human being, whose average cycle time is about 1/18th of a second. However, computer circuitry regularly operates in these ranges. (Example: If a signal travels in free space at the speed of light, about 186,000 miles per second, that is, 300,000,000 meters per second, how far will it travel in one nanosecond?)
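    As a worked answer to that example (simple arithmetic, not part of the original notes), a short sketch in Python:

        # Distance a light-speed signal covers in one nanosecond.
        speed_m_per_s = 300_000_000        # the 300,000,000 m/s figure used above
        one_nanosecond_s = 1e-9            # 1/1,000,000,000 of a second

        distance_m = speed_m_per_s * one_nanosecond_s
        print(distance_m)                  # 0.3 meters - about 30 cm, or roughly one foot, per nanosecond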

    Storage and Memory

    In addition to the entry and exit of data into and out of a CPU, the CPU contains many data paths, logical units, and functional parts. These will be discussed later. However, one of these parts has become so important in its own right that it should be presented from the beginning in the introduction of computer theory. Originally part of the CPU and discussed like any other, the part of the computer that stores data for processing, and the results thereafter, is both an essential part of the design and one of the main technical challenges. This is referred to as memory or storage.

    There are two types of memory mechanism in typical modern computers. The first is the main memory, or that circuitry which is directly accessible, automatically and at high speed, by the rest of the circuitry of the processor. In the early days, this device consisted of cores, or doughnut-shaped pieces of a magnetic ceramic material, that were strung like beads on a grid of wires. By passing current through the wires, binary 1's and 0's could be stored in and retrieved from the cores. These worked well, but due to the laws of physics there was a certain upper limit to the performance that could be achieved. When semiconductor technology advanced to the point of making integrated circuits practical, semiconductor memory devices were a major product. You may be familiar with these as the SIPs, SIMMs, or DIPs used on the motherboards of personal computers. The term memory now generally refers to these semiconductor devices.

    The term storage originally referred to the magnetic core system discussed above. However, the word is now used primarily to describe external data-holding mechanisms such as disk and tape drives. In the old days, disk drives and tape drives were referred to as bulk or auxiliary storage. We tend to think today of either floppy diskettes or Winchester-style small fixed disks or hard disks used in personal computers as storage. These devices have revolutionized the way data are handled in many computer systems, and a system can find itself dependent on its drives as the main determinant of the computer's overall performance.

    So, generally, the term memory refers to solid-state, printed-circuit board things, and storage to disk drives and similar devices.

    Some terms involved with memory include:

    • Read-Write Memory (RWM), typically your motherboard main memory, into which a computer can place data and from which that data can later be retrieved.

    • Read-Only Memory (ROM), typically also found on the motherboard, but which can only be read from, and not written to.

    • Random Access Memory (RAM), which is a term misused by those who really should say RWM most of the time.


    The term RAM really means that a device supports the ability of the computer to access data in a random order, that is, to store or retrieve bytes in a non-sequential way. Both RWM and ROM are RAM in nature. However, the acronym RWM is hard to pronounce, so RAM became the norm.

    Certain types of memory and storage devices can remember what they contain with or without power being applied to the system. Such storage devices are called non-volatile. Magnetic disk drives and cores are good examples. Other types of memory need power to be continuously applied or they will forget what they contain. The memory chips on a motherboard are a good example of this type, called volatile.

    More on Hardware

    In the good old days, you could tell whether a computer was a mainframe or a minicomputer by looking at it and measuring the floor space the cabinets took up. Today, with so much computing horsepower contained in such small devices, physical size is no longer a criterion. Today we measure computer size by throughput, that is, how many instructions the system can do in a given amount of time. Depending upon the design of the system, we have computers whose work is rated in Millions of Instructions per Second, called MIPS. Some computers that are designed primarily with scientific processing in mind do many Floating Point Operations per Second, referred to as FLOPS. A floating point operation is one in which the decimal point in a decimal fraction is taken into account and is included in the design of the numbers used by the computer. More on this later.

    The original idea of a minicomputer was that it was smaller, slower, and cheaper than a mainframe, which traditionally cost a great deal and required a lot of space and people to work it. The minicomputer was almost "personal" in its design. This definition persisted until the advent of the microprocessor, at which time we had the microcomputer, which contained a microprocessor as its primary computing element. As the microprocessor device progressed in capability, the minicomputer became obsolete and the size of the traditional mainframe began to shrink. Microprocessors are now to the point that they can do what minicomputers and small mainframes did just a few years ago. Accordingly we have table-top or table-side systems, floor-standing systems, and laptop and palm-sized computers. We measure these by throughput and performance, regardless of physical size.

    The term supercomputer is used to identify large mainframe systems that are designed for particular types of scientific calculations. These systems are designed to work with numbers at great speed, for everything from preparing weather maps from satellite data to keeping track of aircraft in the sky. They will remain a specialty item for that type of computing.

    Just as the design and inner workings of the CPU have evolved with technology, so have the input/output devices. In the beginning, the primary form of input was the reading of tabulating cards into which holes had been punched in patterns or codes that represented numbers and letters. Although several such coding systems were devised, the Hollerith card code, with 80 columns and 12 rows of holes, became the standard. The cards were read, the data processed, and the result was either more punched cards or a simple printout of numbers and accounting data. The system was limited by how fast the cards could be moved, and some early tab systems had no storage at all.

    Currently we have a variety of I/O devices that have taken advantage of technology. In the old days, the conversion process from human-usable form (orders, waybills, etc.) to computer-usable form (punched cards) was an essential step in the process. Now, many items used by humans every day are also computer-usable, such as credit cards, touch panels and screens, scanning laser badge and tag readers, etc. Every school kid is familiar with a mouse and keyboard, it seems, as these are easy to use if the software is provided.

    Output today falls into two main categories, softcopy and hardcopy. Softcopy is what you see on the screen. It is soft because you can't take it with you except in your mind and memory. The Cathode Ray Tube (CRT) display of the video monitor of the typical personal computer is a prime example. Although the CRT is being replaced with Liquid Crystal and Plasma displays, the venerable video monitor is still the standard for video output. Hardcopy is a term used to indicate a piece of paper on which something is printed. This paper can contain the resulting data in the form of numbers and letters, pictures, and various other images, both in color and monochrome. The method of placing the image onto the paper has also evolved. Originally, the impression of a piece of type that pinches an inked ribbon against paper was the common method; this is called impact printing. The process is similar to a typewriter. We can now generate images using heat, as in thermal printers; light, as in laser printers; and improved versions of the traditional printing means, as in dot matrix and inkjet printers.

    Disk drives fall into the category of I/O as well as that of storage. Two types are currently in use in common systems: the "floppy" or diskette, and the fixed or hard or Winchester disk drive. The floppy is an IBM invention, and was originally released in an 8-inch diameter. This large diskette could store 256,000 bytes of information on one side of the diskette. We now have diskettes in the 5.25-inch size (although this size is fast fading from view) and the 3.5-inch size, whose capacity is increasing with technology. The floppy is designed for portability, backup, and small storage, and is supported almost universally as a simple means of data exchange and retrieval.

    The fixed or hard disk is also an IBM invention, and although many makers produce the devices, IBM holds the most design patents and has done the most to improve the capacity and reduce the size. The size of the device has been reduced from 28" to 14" to 8" to 5.25" to 3.5" to 1.8" in diameter, the speeds have increased from 1500 rpm to 7500 rpm, and the chemistry used to store the data as magnetic lines of force on the surface of the disk has undergone radical changes. Until such time as a solid-state device takes over the whole job of data storage, such drives will form the primary means of bulk data storage.

    More on Software

    Generally, software is divided into two categories: application software and system software. Application software comprises the programs you would use to get a particular kind of work done. Examples are WordPerfect as a word processor, Excel or Lotus as a spreadsheet, etc. These are software packages that the user or operator of the computer interacts with directly. Enormous effort goes into continually writing ever-larger application programs. Programmers can specialize in applications of a specific nature, such as for banking, etc.

    Systems software is used by the computer itself for its own management, or to support the application software. An example of this is the DOS used in personal computers. The computer itself consists of hardware and some small amount of programming in ROM, but to fully support an application such as WordPerfect - that is, to drive the screen images, work with the keyboard or mouse, save data on disks, and generate printouts - the application needs to ask DOS for help with the hardware. So the system software consists of the operating system itself, a wide variety of support utility programs, and programming support in the form of compilers for different languages.


    So, who does the programming? The first programmer is considered to be Ada Augusta Byron, and she has a language (Ada) named after her. The first machine that could actually follow a stored program was the EDSAC, developed in the late 1940's and built on the ideas of John von Neumann, who developed what we now call the stored program concept. While previous machines such as ENIAC simply took one piece of data at a time, processed it, and returned it to the world before taking a second piece, the EDSAC took in a large amount of data, processed it all automatically, and then returned the entire result. While the problems ENIAC was to solve were defined by miles of patch-plug wiring that had to be removed and reinserted for each problem, EDSAC used the same storage mechanism for the data and for the coded steps that the machine was to follow automatically to process the data. Thus, each step became an instruction, and the instructions together as a group formed a program. The program was stored in the same automatically accessible mechanism as the data it was to process. Arranging the steps that the computer will take to logically solve a problem, in the manner of Ada Byron, is called programming.

    The job of programming goes to people of different skill levels and experience. Some choose to specialize in applications while others choose systems. Typically individuals become expert in some particular language or system architecture, and that will define their careers. Generally the beginning programmer, with community college or similar training, will start as a coding clerk, where the primary function is mastering a particular computer language and getting to know the system in use. The programmer then will write programs to solve problems. In some cases the problem is big or complex enough that a specialist is needed to lay out the plan for the programmer, and this person is called a system analyst. It should be noted just what a system analyst is. This is a person who is an expert in some field such as accounting, aircraft design, environmental sciences, etc., and who is also knowledgeable about computers and how they can be used as a tool to solve problems. The analyst is the technical expert of the particular project, and also knows computers well enough to guide others in the work of programming the various project parts.

    An end user is a person who is the last one in the food chain in the writing and marketing of software. If you use WordPerfect to write a term paper, then you are the end user as far as WordPerfect is concerned. As such, you have significant importance and clout. End users can dictate to a certain extent what products survive and what products fail. The acceptance of WordPerfect in the marketplace is a classic example of a product that was at the right place at the right time and caught the public's fancy.

    Specialty Items

    Here are a couple of terms with which you can astound your friends and family.

    • Multitasking is defined as the ability of a computer system to execute what appears to be several programs at the same time. Although this is not really what the computer does, it switches between tasks so fast that it appears to be several computers instead of just one. Windows 2000 and XP and Unix are multitasking systems. It's a hardware/software combination.
    • Timesharing is the apparent use of a system by several people at the same time. The classic example is a mainframe or large minicomputer to which many terminals with screens and keyboards are connected. The system gives each person at a terminal a slice of time ("timeslicing") for their processing, after which the system moves on to the next user for his/her timeslice. This technique makes use of multiprogramming as it attempts to serve all the attached users, who may be doing different things.


    • Front-end Processors are computers that do initial processing and data format conversion before sending the concentrated data to a bigger, faster system. Examples include a supercomputer that accepts input only from a machine that builds problems for it to solve, or a communications processor that filters communications protocols that would otherwise slow down the primary system.

    • Embedded Processors are computers or microprocessors embedded within a larger system.

    These provide intelligence and control at a local level. Flight control computers within an aircraft cockpit are examples, as is the processor that controls the firing of spark plugs in an automobile engine.


    SECTION 2 - Hardware, Part 1: Numbering Systems and Codes

    When working with computers it is necessary to deal with numbering systems other than the decimal system. While the decimal system has served mankind well for thousands of years, it is not easily adapted to electronics. The primary numbering system used in digital systems is binary, with the octal and hexadecimal systems going along for the ride.

    The reason for the use of the binary system is that each position of magnitude can have only two possible values, 0 and 1. It happens that in the laws of physics and nature the two-state condition is the easiest to implement. Switches can be open or closed; current can be flowing or not flowing; current can be traveling left to right, or right to left, within a wire; magnetic lines of force can be clockwise or counterclockwise around a core; lamps can be on or off. Computers make great use of circuits called bistable multivibrators, or flip-flops, that are stable in two different electrical conditions. So, it is extremely easy to implement the binary system in electronic devices.

    By contrast, the decimal system can have ten values or symbols in each position: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. It would require a device with ten different stable states to directly implement a purely decimal computer.

    The binary system is based on the powers of the number 2, starting with 2^0, which is equal to the number 1 in decimal (any number raised to the 0th power is equal to 1). The next order or position of magnitude is 2^1, equal to 2 (any number raised to the first power is equal to itself). The same applies for the higher powers of two: 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, etc. Notice that the decimal equivalents of the binary powers double as the value of the power increases by 1. A table of the first 16 powers of 2, which we will use often, would look like this.

    2^15  2^14  2^13  2^12  2^11  2^10  2^9  2^8  2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
    32768 16384 8192  4096  2048  1024  512  256  128  64   32   16   8    4    2    1

    If you add up all of the values in the second row of the table, the total comes to 65,535, the largest value a 16-position binary number can hold; if you also count the value 0, there are 65,536 possible values in all.

    Each 1 or 0 that can occur in a binary number position is called a bit, which is short for Binary Digit. Since we can have a 0 or a 1 in each of the binary power positions, we call these bit positions, and name them after the power of two for that position. So, on the extreme right of the table, we have a position that represents 2^0, and we call it "bit position 0". The next position to the left represents 2^1, so we call this "bit position 1". Similarly, at the far left end of the table we have "bit position 15". The use of the term "bit position" becomes important in programming and in dealing with computer hardware-software interaction, where the bit positions represent locations within an 8-bit byte or 16-bit word, and we are interested in whether a specific bit position contains a 1 or a 0.
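    As an illustration of testing a single bit position (a sketch of my own, not part of the original notes), using the value 76 that also appears in the table below:

        # Test whether bit position 3 of a byte contains a 1, using a mask.
        value = 0b01001100          # decimal 76
        bit_position = 3
        mask = 1 << bit_position    # 0b00001000, a 1 only in bit position 3

        if value & mask:
            print("bit position", bit_position, "contains a 1")
        else:
            print("bit position", bit_position, "contains a 0")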

    When dealing with actual numeric values, it is convenient to understand the relationship between decimal and binary. From the table you can see that there is a decimal equivalent value for each bit position that corresponds to the power of two for that position. If we wish to convert a binary number to decimal, we simply add up all the decimal equivalents for those bit positions that contain binary 1's, and ignore those that contain 0's.


    BIT POSITIONS 7 6 5 4 3 2 1 0

    DECIMAL EQUIVALENTS 128 64 32 16 8 4 2 1

    9 0 0 0 0 1 0 0 1

    76 0 1 0 0 1 1 0 0

    176 1 0 1 1 0 0 0 0

    135 1 0 0 0 0 1 1 1

    TO CALCULATE THE DECIMAL VALUE OF A BINARY NUMBER, add together the decimal values of all the bit positions that contain binary 1's.

    TO CALCULATE THE BINARY VALUE OF A DECIMAL NUMBER,

    1. By inspection, determine the largest power of two whose decimal equivalent can successfully be subtracted from the given decimal value (successful means that the subtraction returns an answer that is either positive or equal to zero).

    2. Subtract the decimal equivalent of this power from the given number. Keep a record of this successful subtraction by placing a 1 in the bit position of that binary power.

    3. Now try to subtract the next smaller power of two from the result of step 2. It may be too big; if so, place a 0 into the bit position for this power of two. If the subtraction is successful, place a 1 into that bit position.

    4. Continue as in step 3 until the given decimal number is used up. You should end up at the 2^0 power.
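    A small sketch of both conversions in Python (my own illustration; the function names are invented for the example):

        # Decimal -> binary by the repeated-subtraction method described above.
        def decimal_to_binary(value, width=8):
            bits = []
            for position in range(width - 1, -1, -1):    # from the highest power down to 2^0
                power = 2 ** position
                if power <= value:                       # the "successful" subtraction
                    bits.append("1")
                    value -= power
                else:
                    bits.append("0")
            return "".join(bits)

        # Binary -> decimal by adding the decimal equivalents of the 1 bits.
        def binary_to_decimal(bit_string):
            total = 0
            for position, bit in enumerate(reversed(bit_string)):
                if bit == "1":
                    total += 2 ** position
            return total

        print(decimal_to_binary(76))           # 01001100, matching the table above
        print(binary_to_decimal("10110000"))   # 176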

    The binary system is the basic method of all computer counting, but it generates numbers that become very wide very fast, and this leads to human error. Two other number systems have been used to provide a shorthand that makes handling larger numbers easier than with pure binary. These are Octal and Hexadecimal.

    The octal number system is based on the number 8. There are 8 symbols possible in each magnitude position: 0, 1, 2, 3, 4, 5, 6, and 7. This system was widely used in earlier computers, but has been replaced by the hexadecimal system for the most part. The hexadecimal system is based on the number 16, which means that there are 16 different symbols and values that can be placed in each magnitude position. These are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. Note that we use letters when we run out of the decimal digits.

    Unlike the decimal system, the octal and hexadecimal systems work beautifully with the binary system, because their natural place to carry into the next higher magnitude position coincides with that of the binary system. Look at the locations where the octal number or the hexadecimal number rolls over to a higher position, and you will see that it is in the same place as binary. Decimal values, however, do not carry at the same place. Therefore, there is a direct correlation between binary and octal or hexadecimal, but not between binary and decimal. This is why decimal numbers entering a computer are usually immediately changed to a binary or hexadecimal value, worked with in that form by the program, and the answers returned to decimal just before they are returned to the outside world.
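    To make the correlation concrete (a sketch of my own, not from the notes): each hexadecimal digit corresponds to exactly four binary bits and each octal digit to exactly three, so conversion is just a regrouping of bits.

        # Binary 10110000 (decimal 176) regrouped for hexadecimal and octal.
        value = 0b10110000

        print(format(value, "08b"))   # 10110000 - binary
        print(format(value, "X"))     # B0       - 1011|0000 regrouped as 4 bits per hex digit
        print(format(value, "o"))     # 260      - 10|110|000 regrouped as 3 bits per octal digit
        print(value)                  # 176      - decimal, which does not line up with the bit groups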


    BINARY                 OCTAL      HEXADECIMAL   DECIMAL

    2^4 2^3 2^2 2^1 2^0    8^1  8^0   16^1  16^0    10^1  10^0
    16  8   4   2   1      8    1     16    1       10    1

    0   0   0   0   0      0    0     0     0       0     0
    0   0   0   0   1      0    1     0     1       0     1
    0   0   0   1   0      0    2     0     2       0     2
    0   0   0   1   1      0    3     0     3       0     3
    0   0   1   0   0      0    4     0     4       0     4
    0   0   1   0   1      0    5     0     5       0     5
    0   0   1   1   0      0    6     0     6       0     6
    0   0   1   1   1      0    7     0     7       0     7
    0   1   0   0   0      1    0     0     8       0     8
    0   1   0   0   1      1    1     0     9       0     9
    0   1   0   1   0      1    2     0     A       1     0
    0   1   0   1   1      1    3     0     B       1     1
    0   1   1   0   0      1    4     0     C       1     2
    0   1   1   0   1      1    5     0     D       1     3
    0   1   1   1   0      1    6     0     E       1     4
    0   1   1   1   1      1    7     0     F       1     5
    1   0   0   0   0      2    0     1     0       1     6
    1   0   0   0   1      2    1     1     1       1     7

    Coding Schemes: Given that data are stored or passed along inside a computer as binary bits, it soon became obvious that a method of organizing the bits into groups to represent letters, numbers, and special characters was needed. Although the process of calculating with binary digits is at the root of the design of the system, a great deal of data are represented as letters, not numbers. Therefore, several codes have been developed over the years to deal with letters and special characters.

    Baudot Code is named after Émile Baudot, an officer of the French telegraph service. He developed the five-bit code named after him, which became the standard method of sending data between teletype machines as these became available. A teletype is essentially a mechanical typewriter and keyboard connected to a similar unit at a remote distance, the two communicating with each other by sending telegraph-like bit patterns over telegraph lines. Operating the keyboard on one terminal would send five-bit groups over the wires, where they would actuate the typewriter of the remote terminal. This technique was standard for the Western Union telegraph service and others during the first half of the century.

    American Standard Code for Information Interchange (ASCII) is a code that grew out of the expansion of digital devices that needed to communicate, but which needed a greater range of representable characters than Baudot code could provide. This code was developed and agreed upon by a consortium of companies that acknowledged the need for competitors to be able to communicate. The code comes in two versions, ASCII-7 and ASCII-8. ASCII-8 is little used now, but finds use in communications overseas where technology may not be up to 1999 levels. ASCII-7 is the code most used by minicomputers and microprocessor-based machines, including personal computers. The seven bits of each byte used can represent 128 different combinations of printable and non-printable characters, the latter used to control equipment rather than to print on it. It has many of the earmarks of the earlier Baudot code in that it grew out of the teletype paradigm. The code can represent both upper and lower case letters as well as numbers. However, the organization of the bit patterns that represent the numbers is not binary-aligned (similar to the problem with decimal discussed above). Therefore, a translation function is almost always included in programming where the data are brought into the system or sent from the system as ASCII, but are used in computations as pure binary.
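    A small sketch of the kind of translation meant here (my own illustration): the ASCII code for the character "7" is not the binary value 7, so digit characters are converted to a pure binary value on the way in and converted back on the way out.

        # ASCII text -> pure binary value, and back again.
        text = "176"                       # three ASCII characters, codes 0x31, 0x37, 0x36

        value = 0
        for ch in text:
            value = value * 10 + (ord(ch) - ord("0"))   # strip the ASCII offset, build the number
        print(value)                       # 176, now usable in binary arithmetic

        result = str(value + 1)            # compute, then convert back to ASCII characters for output
        print(result)                      # 177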

    IBM PC ASCII is essentially the ASCII-7 code, and the lower 128 bit combinations are used in PC I/O devices such as screens and printers as usual. However, in an 8-bit byte, the 7-bit code leaves a bit unused. In some systems, this can be used as a parity bit or ignored. However, IBM elected, when the PC was designed, to use the 8th bit to double the ASCII-7 code range (remember, adding another bit position to a binary number doubles the number of combinations you have). The new 128 characters thus created were used by IBM for characters for non-English languages, Greek letters and symbols for mathematics and science, and simple graphics to form borders and grids on screen displays. Although computer graphics has long since gone beyond this phase of display, the PC ASCII code is still used as a reference for simple text work.

    Extended Binary Coded Decimal Interchange Code (EBCDIC) was first used by IBM when they introduced the System/360 in 1964. This code uses all 8 bits of the byte for 256 combinations that represent letters, numbers, and various control functions. The decimal equivalent numbers are binary-aligned, such that EBCDIC numbers coming from an input device can be fed directly to the processor and into computations without translation. The code is based on an earlier Binary Coded Decimal (BCD) code, which was used in earlier IBM products such as the 1401. This was a 6-bit code in which the decimal numbers that were to be involved in calculations were binary-aligned.

    There are a variety of other coding systems, and the internal workings of each processor use the bits of bytes and words in many different ways. Watch for variations on these themes.

    Error Checking is used extensively in computers to make sure that the answers you are getting are correct. The validity of the data and of the results of the computations overall is referred to as Data Integrity. Are the answers really correct? Error checking can be done with hardware and software. Usually a system has several different implementations of both to ensure integrity.

    Vertical Redundancy Checking (VRC), or Parity Checking, is a means of counting the bits that are set to 1 in every byte or word of a data stream. Suppose a magnetic tape drive is reading a record of tape. The bits of the bytes of data on the tape stretch across the tape width (vertically), not along the tape (horizontally). As each byte reaches the processor, the number of 1's in it is counted. We are not interested in the binary value of the byte, but rather in whether an even or odd number of bit positions contain 1's. In an Odd Parity system, we want the number of 1's to be odd, that is, one bit, three bits, five bits, etc. If we count them and find that there is an even number of bits set to 1 (no bits, two bits, four bits, six bits), we turn on a special bit called the Parity Bit or Check Bit to ensure that the number of 1 bits in the byte is odd. If we find that the number of bits is already odd, we leave the parity bit off. So, a parity bit goes along with every byte; it carries no data value itself, but is used to ensure parity. An Even Parity system works similarly, except that we use the parity bit to ensure that the total number of 1 bits in the byte is even.
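    A minimal sketch of odd parity generation and checking (my own illustration; the notes do not prescribe any particular code):

        # Odd parity: the parity bit is chosen so that the total count of 1 bits comes out odd.
        def odd_parity_bit(byte_value):
            ones = bin(byte_value).count("1")
            return 0 if ones % 2 == 1 else 1        # already odd -> leave the parity bit off

        def check_odd_parity(byte_value, parity_bit):
            ones = bin(byte_value).count("1") + parity_bit
            return ones % 2 == 1                    # True means the byte passes the check

        data = 0b01001100                           # decimal 76: three 1 bits, already odd
        p = odd_parity_bit(data)
        print(p)                                    # 0 - the parity bit stays off
        print(check_odd_parity(data, p))            # True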


    Longitudinal Redundancy Checking (LRC) is a similar checking system that counts the numbers of bits set to 1 horizontally along the data stream. Here we are not interested in each byte, but rather in one of the bit positions of all the bytes, say bit position 3. As the bytes pass by, we add up all the 1's that pass in bit position 3. We do the same for the other bit positions as well. At the end, we have a character that represents the summation of the bits in each bit position for that data stream. This character, called the LRC Character, is written on the tape or follows the data transmission from the sending end to the receiving end. As the data arrive at the destination, a similar LRC character is gathered. The two are compared, and if they match we assume that the transmission or tape reading was OK. If they do not match, we assume a reading or transmission error.
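    One common realization of this idea is a longitudinal exclusive-OR, which keeps, for each bit position, whether the count of 1's so far is odd or even. A sketch (my own simplified illustration):

        # LRC: accumulate across the whole data stream, one result bit per bit position.
        def lrc(data_bytes):
            check = 0
            for b in data_bytes:
                check ^= b                # XOR folds each byte into the running check character
            return check

        record = [0x31, 0x37, 0x36]       # the ASCII characters "1", "7", "6"
        sent = lrc(record)
        received = lrc(record)            # recomputed independently at the receiving end
        print(hex(sent), sent == received)   # matching characters -> assume the transfer was OK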

    The problem with both of these methods is that if an even number of bits is picked up or dropped during the transmission or reading of the media, it is possible for errors to go undetected. Therefore, on magnetic recording systems and some networks, a Cyclic Redundancy Check (CRC) is used. The CRC check character is gathered similarly to the LRC character above, but it is processed by a shift-and-add algorithm rather than by simple addition. The result may be more than one check character in length. When all three check methods are used and no errors are found, the assumption is that the data are clean.
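    For comparison, a bit-by-bit CRC sketch (my own illustration of the common shift-and-XOR formulation; the generator polynomial 0x1021 is the CRC-16/CCITT choice and is an assumption, since the notes do not name one):

        # CRC-16 computed bit by bit with a shift register and XOR of a generator polynomial.
        def crc16_ccitt(data_bytes, poly=0x1021, crc=0xFFFF):
            for b in data_bytes:
                crc ^= b << 8                            # bring the next byte into the high half
                for _ in range(8):
                    if crc & 0x8000:                     # a 1 is about to shift out: fold in the polynomial
                        crc = ((crc << 1) ^ poly) & 0xFFFF
                    else:
                        crc = (crc << 1) & 0xFFFF
            return crc

        print(hex(crc16_ccitt(b"176")))   # both ends compute this value and compare to detect errors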

    The Central Processing Unit

    The Central Processing Unit (CPU) is the heart of the computer system. It contains all the circuitry necessary to interpret programs that define the logical processes the human programmer wants carried out. It consists primarily of electronics which implement logical statements. These statements are worked out in Boolean Algebra, a non-numeric logical algebra that defines the logical relations of values to each other.

    The CPU is responsible for the interpretation of the program and, following the instructions in the program, causes data to be moved from one functional unit to another such that the results desired by the programmer are obtained. Input data are given to the CPU and are processed by being moved about within the CPU's functional units, where they undergo logical or numeric changes along the way. When the processing is done, the data are returned to the human world as output data.

    Historically, there are two designs that have been used in CPUs. The first dates from the time of John von Neumann, and may be referred to as a "dedicated system". This system has circuitry that is dedicated to specific purposes - an adding circuit that does addition, a subtracting circuit that does subtraction, a circuit that only compares, and so on. None of the circuits are active except the one that is needed at the moment. This is wasteful of circuitry and makes the system larger and more power-hungry.

    The second type of system appeared commercially with the advent of IBM's System/360 in 1964. This system may be defined as "non-dedicated". The individual circuits needed for discrete functions in the earlier machines were replaced by a single multipurpose circuit that could act like any of them depending on what it was told to do. This circuit was called the Arithmetic Logic Unit (ALU). It could act like an adder, a subtractor, a comparator, or any of several other functions based on what it was told to do.

    A block diagram of a modern CPU includes the following functional units:

    • Registers: Registers are groups of circuits called bistable multivibrators, or flip-flops for short. These are circuits made of pairs of transistors that have the ability to remain stable in one of two logical states. They can be said to contain a binary 0 or 1 at any specific time. Groups of flip-flops can be used to store data quantities for a short period of time within the CPU; 8 flip-flops could store one byte, and 16 could store one word.

    • General Purpose Registers (GPR) are groups of flip-flops that act to hold bytes or words for a period of time. Unlike most registers in the system, these registers are visible to the programmer as he/she writes the instructions to implement the program. The programmer can refer to these registers in the program and put data into, and take data out of, them at any time.

    • The Arithmetic Logic Unit (ALU): This unit has responsibility for all of the arithmetic and logical functions of the system. It is composed of one fairly complicated circuit that can act like any of several types of mathematical or logical circuits depending on what it is directed to do. This device has no storage capability; that is, it does not act like a register or memory device. It introduces a small delay as the data pass through it, called the transient response time.

    • The Instruction Register receives the incoming instruction and holds it for the duration of a machine cycle or longer. It makes the instruction available to the system, particularly the Control Unit.

    • The Program Counter is a register that keeps track of the location of the next instruction to be processed after the current one is finished. It contains memory addresses in binary form.

    • The Control Unit (CU) accepts the instruction from the Instruction Register and, combining the instruction with timing cycles, causes the various functional units of the CPU to act like sources or destinations for data. The data moving between these sources and destinations may be processed on the way by moving through the ALU.
    • System Clock: The system clock is a timing cycle generator that creates voltage waves of varying periods and durations which are used to synchronize the passage of data between functional units.

    • Input/Output System: This system provides the means by which input data and instructions enter the system, and output data leave the system. Remember that in a von Neumann machine, the data and the instructions that direct their processing sit side-by-side in the same memory device.

    • Main Memory is contained within the CPU, and stores the data and instructions currently needed by the executing program. The speed with which the memory and the rest of the system communicate is a critical issue, and the center of much development. This may also be called Primary Storage.

    Here is more detail on some of these items.

    The Control Unit has undergone major design changes over the years. The current approach is to make the CU essentially a computer within a computer. Just as the CPU has I/O devices between which it can move data, the CU treats the functional units of the CPU as sources and destinations. The CU takes the instruction from the Instruction Register and the timing cycles from the System Clock. It combines these by stepping through what amount to microinstructions contained within its own circuitry. By following the microinstruction pattern built into itself for a given instruction, the CU implements the instruction desired by the programmer by moving data between and through the various functional units in step with the system clock. The effect is one of doing the required instruction as far as the outside world can see.

    An example would be the process of executing an Add instruction. The programmer writes an Add instruction along with additional information such as where in the system the two data items to be added are located. Given this as a starting point, the CU starts to follow its own set of microinstructions to find the two data items, pass them through the ALU to accomplish the Add, and catch the sum at the output of the ALU. It then returns the sum to a functional unit such as a register to hold the answer for the next instruction.

    Because the sequence of events in the earlier dedicated systems operated at the binary level, and because the programmer and technician originally could work directly with the circuitry, either from the front panel via lights and switches or via a program, the lowest or binary level of programming became known as Machine Language (ML). With the advent of the microprogrammed Control Unit, the instructions contained within the CU became known as microprogramming or microcode. This means that currently the Machine Language, which is the lowest level that the programmer can see, is implemented by microprogram instructions or steps. The technician can work at the microprogram level, but the programmer typically would not. When programming logic and instructions are embedded permanently into the circuitry of a device, the result is referred to as firmware.

    The System Clock is a timing signal generator that creates a variety of voltage waveforms used to synchronize the passage of data through the functional units. There are two types of electronics or logic in the system, synchronous and asynchronous. The word synchronous means "in step with the passage of time", while asynchronous means "not in step with the passage of time". Synchronous circuitry is that which has a clock timing signal of some kind involved with it. GPRs, for example, are synchronous, because they accept data into themselves at a particular moment or clock time. The ALU is an asynchronous circuit - it doesn't store data; it passes data through as quickly as possible and does not rely on a clock signal to do so.

    In synchronous systems, the system timing is divided into regular intervals of time called Machine Cycles. All system activity is based on the elapse of the machine cycles. These are further divided roughly into two types of cycles: Instruction Cycles, or I-Cycles, and Execution Cycles, or E-Cycles. Instruction cycles are those that are responsible for obtaining an instruction from the main memory, placing it into the Instruction Register, and starting the CU's process of analyzing the machine language instruction to determine which microprogram to execute. By the time the I-Cycles are completed, the instruction is ready to execute, and the system already knows where in main memory the next instruction will be found after the current instruction is completed. E-Cycles have the responsibility of actually causing the instruction to be accomplished. They involve a series of microinstructions that move data around between the functional units of the system so that the desired result is achieved. The E-Cycles must recognize when the instruction has run to completion, and hand the system off to the I-Cycles again for the next instruction.

    In modern systems, including those based on the microprocessor device, these cycles can be overlapped. The I-Cycles for instruction number 2 are getting underway while the E-Cycles for instruction number 1 are being performed. This is a simple example of parallel computing.
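    A toy sketch of the I-cycle/E-cycle idea (entirely my own illustration; the instruction format and opcodes are invented for the example, and real machines differ greatly):

        # A toy stored-program machine: fetch (I-cycles), then execute (E-cycles), one instruction at a time.
        memory = [("LOAD", 5), ("ADD", 7), ("STORE", 0), ("HALT", 0)]   # the program, sitting in main memory
        accumulator = 0
        program_counter = 0
        results = {}

        while True:
            opcode, operand = memory[program_counter]   # I-cycles: fetch into the "instruction register"
            program_counter += 1                        # the program counter already points at the next instruction
            if opcode == "LOAD":                        # E-cycles: carry the instruction out
                accumulator = operand
            elif opcode == "ADD":
                accumulator += operand                  # the ALU's contribution
            elif opcode == "STORE":
                results[operand] = accumulator
            elif opcode == "HALT":
                break

        print(results)                                  # {0: 12}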

    Main memory or primary storage is tightly connected to the dataflow of the CPU. It is a primary source for the instructions and data needed for program execution, and a primary destination for result data. The programmer really has little else to specify, for the most part, other than a main memory location or a GPR.

    Data are stored in main memory at locations called addresses. Each address can contain one or more bytes of data. If the smallest lump of data that can be referred to with a single address is a byte, then the machine is referred to as byte-addressable. If the smallest lump of data that a single address can refer to is a two-byte word, then the system is called word-addressable. Some special purpose devices can use an address to refer to a single binary bit within a byte in the memory. These machines are called bit-addressable.


    The number of addresses, and therefore the number of storage locations a memory system can have, is determined by the width of the address bus of the CPU. This total number of addresses is called the address space. The address bus is a set of parallel wires that distribute binary 1's and 0's to the memory system in a synchronous manner. Each additional bit of width given to the address bus doubles the size of the memory possible. An address bus one bit wide, A0, could specify one of two addresses, number 0 and number 1 (remember there are two states possible for a binary bit). If the address bus were two bits wide, A0 and A1, then there would be four addresses in the memory system, because there are four possible combinations of two bits: A0=0, A1=0; A0=1, A1=0; A0=0, A1=1; and A0=1, A1=1. If we have three bits of address bus, A0, A1, and A2, then we would have 8 possible addresses, and so on. Following this plan, what would be the address space for address bus widths of 16, 20, 24, and 32 bits?
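    As a worked answer to that question (simple arithmetic, not part of the original notes):

        # Address space = 2 ** (width of the address bus in bits).
        for width in (1, 2, 3, 16, 20, 24, 32):
            print(width, "address lines ->", 2 ** width, "addresses")

        # 16 lines -> 65,536 (64K); 20 -> 1,048,576 (1M); 24 -> 16,777,216 (16M); 32 -> 4,294,967,296 (4G)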

    The interaction of the main memory with the rest of the CPU is a critical factor in the overall performance of a computer. Typically, when core storage was used, the speed of the core system was slow enough compared to the speed of the electronic circuitry that the electronics had to wait for the memory to respond to a request. The CPU would add machine cycles of wasted time (in microprocessors, called wait states) to slow the circuitry down and give the memory time to respond. With the advent of microprocessors and solid state memory, we still have this problem, because the speed of the microprocessor device is still significantly greater than that of the main memory connected to it. We overcome this problem by the addition of a cache memory. The cache is a small amount of high speed memory that is able to keep up with the processor with no waiting. It interfaces the processor, at its speed, to the main memory at the slower speed. This is tricky to do, and there are a variety of cache controller devices and methods currently in use to make this process as efficient as possible. With a good cache controller, it is possible for the memory to have the needed data or instruction information available to the processor about 99% of the time. This is called the hit rate.
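    A common way to see what a high hit rate buys is to compute the average (effective) access time. A sketch (my own; the timing figures are made-up assumptions, not from the notes):

        # Effective access time = hit_rate * cache_time + (1 - hit_rate) * main_memory_time
        cache_time_ns = 2           # assumed cache access time
        memory_time_ns = 60         # assumed main-memory access time, including wait states

        for hit_rate in (0.80, 0.95, 0.99):
            effective = hit_rate * cache_time_ns + (1 - hit_rate) * memory_time_ns
            print(hit_rate, "->", round(effective, 2), "ns on average")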

    Currently, there are two schools of processor design of which you should be aware. The first is called the Complex Instruction Set (CISC) approach. This is the traditional mainframe approach, and the System/360 was famous for it. The CISC machine uses complex instructions to do its work. One instruction might cause one number to be incremented, another to be decremented, the two results compared, and a change in execution direction (a jump or branch) taken depending upon whether the two numbers are equal. That is a great amount of work for one instruction to do, but it is fairly easy to implement, since with microprogramming it is an easy task to simply connect microroutines together to accomplish it. Such systems can be rather slow in execution, but are easy to program because they have what we call a "rich" instruction set. Most microprocessors, including the Intel machines, are of this type.

    Reduced Instruction Set (RISC) computers are designed the other way around. These machines have a small number of simple instructions, but they execute very, very fast. Their electronics are hardwired or dedicated, as opposed to microprogrammed. The results of the complex instructions can be obtained by writing routines that implement the logic of the complex instruction using the small instructions. The result is that a RISC machine can execute at an overall faster rate, even though it seems to be doing more instructions to get the result. Various tricks with clock distribution, internal pipelining, and similar approaches are also used in the RISC design to further improve the throughput. RISC machines are finding use as large workstations for CAD, design, engineering, and related uses.

    In a CISC machine, the ultimate throughput depends on how fast the ALU can be supported by the rest of the circuitry. Indeed, no matter how fast the support electronics are, if the machine has only one ALU, then it can execute only one instruction at a time. Parallel Processing involves a design in which there may be more than one ALU. This allows more than one thing to be processed at a time, thereby increasing the performance of the system. The parallelism is not limited to just the ALU. It is possible to have more than one set of GPRs, data paths, and I/O paths as well. The primary difference between the Intel '386, '486, and Pentium lies in their internal architecture, which uses ever-increasing numbers of parallel functional units to increase throughput.

    Another method of increasing throughput is the use of a Coprocessor. This device acts as a parasite on, and in concert with, the main processor. It cannot operate by itself. It uses, at will, bus access and control signals that it shares with the primary processor. The coprocessor is designed to do a specific set of small tasks, but to do them very fast. The best example is the 8087 math coprocessor from Intel, which works in concert with the 8086 processor. The 8087 has an additional set of instructions that it can perform, over and above the instruction set of the main processor. As the instructions enter the processor and coprocessor together, the 8087 watches for one of the instructions that belongs to its set. When such an instruction comes along, the 8086 hands control of the system over to the 8087 for it to do its thing. When the instruction or instruction stream is complete, control passes back to the 8086 again. The 8087 can deal with floating point numbers and very large numbers that would take the 8086 much longer to process.

    Peripheral Devices, Character-based

    Peripheral devices are those that support the processor by delivering data to the processor, taking results away, or storing data and instructions so that they can be accessed by the processor at any time. In this section we will discuss those peripheral devices that are primarily character-based, that is, they deal with data one character or byte at a time.

    Source documents are those documents that come from the human world to the computer. They can be order sheets, sales tags, handwritten receipts, or an infinite number of similar things. They are empirical; that means they are gathered at the source of the related activity, which may be miles from the nearest PC. Computer-usable documents are those pieces of paper or media that can be accessed by the computer's input/output devices without need for further preprocessing. These include the venerable punched card, optically read documents, magnetic stripe credit cards, and keyboard entry. In the old days, a major conversion had to occur to make the source documents computer-usable. Traditionally, the source documents were brought to the computer site, where they were read by a keypunch operator who generated a deck of punched cards that represented the data on the source items. This step consumed time and money. Therefore, a great variety of data entry techniques have been developed to eliminate the translation process. Credit cards with magnetic stripes, optically read lotto tickets, and laser-scanned canned goods and potato chip packages name just a few.

    Early methods of generating computer-usable documents centered around punching holes in things.

    These included the Hollerith punched card and paper tape, which was used in teletype systems and various early data recording methods. The punched card had twelve rows for holes, named 12, 11, 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, from the top down. The card was divided into 80 columns, left to right. The top three rows were called zones, and the bottom rows were called digits. If the area or field, or group of columns, of the card being discussed contained numeric data, then the 0 row was considered a number 0. If the field of the card being discussed contained alphabetic or alphanumeric information, the 0 row was considered a zone. The three zones could represent thirds of the alphabet, so that a punched card could contain numbers, upper case letters, or a few special characters. The holes were placed into the card by a keypunch machine, an electromechanical device with a keyboard and a card path through which the cards passed to be punched or read.

Using the keyboard as the human interface, other key-entry machines have come and gone. These include key-to-tape devices, in which keyed data was written into 80-column records on magnetic tape (to match the organization of the punched card). They also include stations for key-to-disk and key-to-diskette entry. The first took information from the keyboard and placed it onto a fixed disk, while the second placed the data on a floppy diskette. Today we normally assume that a PC or PC-like station will be the entry point and that it will be connected to the computer by a network of some kind.

Other types of character entry besides keyboards include:

• the mouse, a small movable device whose position on the table is represented by a pointer on the screen;
• Optical Character Recognition (OCR) devices, which read printed characters from a medium by doing pattern recognition;
• Magnetic Ink Character Recognition (MICR), used in the banking industry to encode values on checks;
• Light Pens, which are used to indicate a certain point on the screen to which the user wishes to call the computer's attention;
• Touch Panels, which can receive input in the form of a person's finger touching a point on the screen;
• Bar Codes, which are scanned by laser to generate a pattern of 1's and 0's that can be interpreted as binary data;
• Point-of-Sale (POS) terminals, which act like computerized cash registers and checkout stands where commercial selling is done, and which might have other I/O devices like laser scanners included within them; and
• Voice Recognition and generation, which attempts to communicate with the user by the spoken word.

The word terminal refers to any of a wide variety of keyboard-plus-display machines that can interact with a user on behalf of a computer. The earliest was the teletype, which could display information to the user by printing it on paper. Video display terminals came into their own only in the early 1970's, because the semiconductor memory devices needed to store the image for the screen were not plentiful until that time. We now have completely intelligent terminals, such as the PC, that can do their own processing most of the time and need to communicate only at certain times.

Video Display devices are those that can provide a text or graphic image on a surface, usually a Cathode Ray Tube (CRT). The text image is stored as ASCII data, and a refresh circuit circulates through the storage device, or buffer, many times a second to generate the image on the screen. The circulation of data from the buffer is synchronized with the vertical and horizontal timing of the raster on the display tube so that a stable display of letters is produced.

In graphics displays, the field of the CRT's screen is divided into picture elements, or pixels. A pixel is a dot of light on the screen, or the place where such a dot of light can be. Resolution is a word that indicates the number of pixels horizontally across the screen, and the number of pixels vertically down the screen, that a given video display can produce. A display with a Video Graphics Array (VGA) image will have 640 pixels horizontally and 480 vertically. This is a standard reference value in current PC technology.

Each pixel has certain characteristics. These include its size, or dot pitch, which is a function of the manufacturing process of the CRT and is the diameter of the dot of light in millimeters (e.g. 0.28 mm dot pitch), and the number of colors it can represent, which is determined by the video display adapter to which the display itself is attached. It is important to make sure that the display itself and the adapter to which it is to be attached are compatible in sweep speeds and interfacing. It is possible to damage a display if it is connected to an incompatible adapter.
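To make the relationship between resolution, color depth, and adapter memory concrete, here is a small illustrative calculation. The bits-per-pixel figures (4 bits for 16-color VGA, 8 bits for 256-color SVGA) are common values assumed for the example and are not taken from these notes.

    # Illustrative sketch: video memory needed for one full-screen image.
    def frame_buffer_bytes(h_pixels, v_pixels, bits_per_pixel):
        """Bytes of frame buffer needed at a given resolution and color depth."""
        return h_pixels * v_pixels * bits_per_pixel // 8

    print(frame_buffer_bytes(640, 480, 4))   # VGA at 16 colors: 153600 bytes (about 150 KB)
    print(frame_buffer_bytes(800, 600, 8))   # SVGA at 256 colors: 480000 bytes (about 469 KB)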


Displays that can generate only one color are called monochrome displays, while those that do colors are called color or polychrome displays. The early PC had two monitors and display adapters available. The Monochrome Display Adapter (MDA) and its display generated a green image with a characteristic character shape that is still with us, but this display was of higher resolution than a regular television and therefore was not compatible with television monitors or standards. It was modeled after IBM's mainframe display device, the 3270. The Color Graphics Adapter (CGA) and its display were designed to be NTSC compatible, so that a buyer could use a color home television as a display. The resolution was poor, but the device could generate graphics and color reasonably well. In 1984, IBM introduced the Enhanced Graphics Adapter (EGA) with the PC/AT. This allowed the resolution of the monochrome device to be viewed in color. The issue of the use of the home television had by this time become unimportant. In 1987, IBM introduced the PS/2 product line and with it the VGA device. This set a baseline standard for display resolution and performance. The Extended Graphics Array (XGA) was an attempt by IBM to define a standard for higher-than-VGA resolutions, but most makers did not adhere to the specifications. The Super VGA (SVGA) was proposed instead, with a pixel map of 800 (h) x 600 (v) pixels. This has since been adopted by all makers, including IBM, but it was never a fully agreed-upon standard.

Liquid Crystal Displays (LCD) and their derivative, the Active Matrix Display, also called the Thin-Film Transistor Display (TFTD), make use of a liquid crystal sandwiched between two pieces of glass that have been coated with conductive transparent oxides. By controlling the voltage between the two pieces of glass, the liquid crystal can be made either opaque (no light passes through) or transparent (light passes through). By installing a transistor at each pixel location on the glass, the TFTD can increase the contrast ratio of the opacity of the crystal, generating a clearer, crisper image that changes instantly instead of slowly as the LCD does.

The term printer refers to a variety of devices that place characters on a receiving medium. The methods of doing this come under the categories of impact printing and non-impact printing.

Impact printing has its beginnings in the press of Gutenberg, which used fonts, or the shapes of the desired characters, carved into blocks of wood in high relief. Each letter had to be carved by hand. The letters were placed together in a frame so that they were compressed on all sides and did not move. The frame was placed onto a rolling carriage and ink was spread onto the tops of the fonts. Paper was then placed onto the fonts; the carriage was rolled under a heavy metal plate, or "platen", which was pressed down onto the back side of the paper. The ink on the fonts was thus transferred onto the paper in the shape of the fonts. While this method used pressure rather than a fast-moving impact, it was nonetheless the beginning of mechanical printing as we know it.

Today, a wide variety of printers use some variation of this process. They all have these five things in common:

1. The character shape, or font, which can be carved or cast from metal or plastic, or formed by a pattern of dots. The term "font" also applies to a space where a character could be but is not.
2. Paper, or the medium to which the coloring element or ink is transferred. Paper is the most common, but printing can be done on plastic, metal, wood, or just about any surface.
3. Ink, usually found in the form of a ribbon that has been saturated with ink. Ribbons can be made of many fibers, but the standard now is nylon.
4. The platen, or some related device that provides a backstopping action to the printing movement.
5. Physical motion, which brings all of the above together with sufficient pressure or force to cause the ink to be transferred to the receiving medium.


Almost always we will find that two or more of the five elements of impact printing are combined into one physical mechanism. Examples include:

• Typewriter, in which the font, in the form of a cast slug of metal on the end of an arm, is thrown toward the ribbon so that it impacts the ribbon to transfer the ink to the paper. The rubber roller around which the paper wraps is called the platen, and it serves to backstop the flying key. The font and physical motion are combined into one mechanism.
• Dot matrix printer, similar to those found in the student labs. In this case, the font consists of a dot pattern that is formed by striking the ribbon against the paper with the ends of a set of wires that are electro-mechanically moved forward, then retracted, at high speed. The font and physical motion are represented by the print head with the wires inside. The platen is a small smooth metal piece behind the paper.
• Drum printers are found on larger systems and have largely been replaced by large laser printers. They have a metal drum whose surface is covered with fonts in lines and circles such that each time the drum makes one complete revolution, every possible font is exposed to every possible character position. The paper is pushed from behind by a hammer mechanism that causes the paper to move forward and be pinched between the drum and the ribbon. The combination here is the platen, formed by the drum, and the fonts on it.
• Chain and Train printers work similarly to drum printers. However, instead of a drum with characters on it, the fonts are made on metal slugs that travel around in an oval on a racetrack. The chain, where the slugs are hooked together, or the train, where they are not hooked but push each other around, spins across the width of the paper. A hammer mechanism for each print position fires from behind the paper to press the paper against the ribbon on the other side, which is then pressed against the font as it passes by. This method combines font and platen.

Non-impact printing uses modern techniques to form characters of a contrasting color on a medium. There have been many non-impact printing methods over the years; the three most common now are thermal printing, where heat is used to form the characters; optical printing, where light is used; and ink jet printing, where ink is simply sprayed onto the paper.

Thermal printing involves a specially treated paper that has a light background tint but which turns darker with exposure to heat. The heat is often formed in a pattern of dots that is created as a print head passes slowly over the paper surface. The print head consists of a row of diodes encased in glass bubbles that are turned on and off very quickly, and which can heat up or cool down almost as fast. As the glass bubbles on the print head contact the paper, the current is turned on and then off quickly, causing the glass bubble to heat up, then cool down rapidly. This in turn causes the area of the paper that was in contact with the bubble at the time of the heating to turn darker, typically either blue or black. This method is used in desk calculators and many credit card and cash register applications.

Optical printing is best illustrated by Laser Printers. These devices have a rotating drum that is covered with a cadmium sulfide compound that is sensitive to light. When light shines onto the drum, the surface where the light impinges becomes electrostatically charged. As the drum turns, the charged area is exposed to a very fine black powder, or toner, which sticks to the areas where the charge was placed. This area is then rotated further to a point where the toner is transferred to a piece of paper as the two are pressed together. Finally, the paper is heated as it exits the machine to seal the toner into the paper. The character shapes can be drawn onto the rotating drum by a focused laser beam, and this beam can be steered to create the desired pattern of dots. The characters are not whole fonts - they are formed by very small dot patterns, typically at a resolution of 300 x 300 dots per inch.


Ink jet printing involves the spraying of minute ink droplets onto the paper as a spray nozzle moves across the width of the page. The ink is pumped under pressure to a nozzle that generates a very fine stream. This stream is passed through electrodes that are charged with an ultrasonic signal so that the stream becomes a stream of tiny droplets. These are further "steered" by more electrodes to guide the droplets up or down as the print head makes its excursion. The result is finely generated printing that can come in colors and do excellent graphics.

Plotters are large printers that generate drawings or graphics as opposed to print. Pen plotters use real ink pens in varying sizes and widths that are moved over the paper in an X-Y fashion to generate the desired line drawing. These can be very fast, but have certain limitations on accuracy and resolution. Photoplotters are essentially giant laser printers (although the original ones did not use lasers) which use a dot matrix of light points to generate a high-contrast pattern on film. These are used to create printed circuit boards and integrated circuit device masks.

    A few more terms to round out the printer discussion:

• Paper Feed techniques include methods of moving paper through a printing mechanism. The most common form is called a pressure roll or pressure platen technique, in which the paper is pinched between two rubber rollers, or a roller and a platen, which is then rotated to move the paper. Single sheet or cut sheet paper is most frequently used in these machines. Tractor feed is used in high speed paper motion to move the paper by tractor pins that pass through holes along the edge of the paper so that the paper is mechanically, positively moved. Paper used in this type of machine is called continuous forms.

• Dot Matrix Printing indicates that the form of the character is made up of a pattern of dots in an X-Y arrangement rather than by complete unbroken lines. Most printers today use this technique, as the resolution has improved to the point where it is hard to tell the real thing from the dot pattern.

• Near Letter Quality (NLQ) is a term used to indicate dot matrix printed output that is very close in quality to the results that could be obtained by whole-character, that is impact, printing.


    SECTION 3 - Hardware, Part 2

    Data Storage Organization

When confronted with storing data, and particularly large amounts of data, it is necessary to organize the bytes of information in a way that makes sense to the nature of the data, and also to the mechanism in which the data are being stored. The user wants to see the information in a way that makes sense to him or her. For instance, if the user wishes to keep a name and address list of club members, the interaction between the user and the computer should be in a way that makes sense to the nature of the list. That is, to add a new member to the list, the user would enter the member's name on the first line of a screen, the first address line on the second line of the screen, and the city, state and zip code on the third line. This would match a typical hand-addressed envelope. The data, however, are not stored in handwritten form, but as bytes on magnetic disk. The disk drive, being a mechanical device, has certain characteristics and limitations that must be met if it is to be useful. The data must therefore be converted to a different organization than the simple three lines when they are sent from the screen/keyboard of the system to the drive. They must be reorganized to fit the limitations of the disk drive device. Also, when data are retrieved from the drive later, they must be converted from a disk drive organization to a different organization that better fits the understanding of how the user will deal with them. It is up to the computer and the operating system between the user and the drive to make these organization conversions.

The original standard for data organization was the Hollerith punched card. This piece of stiff paper was arranged as 80 vertical columns of twelve horizontal rows each. The card could therefore contain as many as 80 letters or numbers. Because it was of fixed length, the card and the 80-character grouping were referred to as a unit record. The gray-covered electromechanical machines used to read, punch, process, and print the data on the cards were called unit record machines. From the inception of IBM up through the mid-1950's, unit record machines were the mainstay of the data processing industry.

In the early days of computers, the group of 80 characters was maintained as a reference quantity of data. The record, as a standard unit of data, was composed of one or more fields, which in turn were composed of one or more characters. An example would be a punched card or computer record that contained the entry of one person's name in a list of names for a club membership. The first field might contain the member's name and be 20 characters long. The second field could be an address of 20 characters; the third might be a city name of 15 characters; the fourth field might be a state code of 2 characters; and the next field might be a ZIP code of 5 characters. Together, these 62 characters make up one member's address for the club roster.
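The same layout can be sketched in a few lines of code. The field names and the pack/unpack helpers below are illustrative only, assuming the 62-character layout described above; they are not part of the original notes.

    # Minimal sketch of a fixed-length record built from named fields.
    FIELDS = [("name", 20), ("address", 20), ("city", 15), ("state", 2), ("zip", 5)]

    def pack_record(values):
        """Build one fixed-length 62-character record, padding each field with blanks."""
        return "".join(values[name].ljust(width)[:width] for name, width in FIELDS)

    def unpack_record(record):
        """Split a 62-character record back into its named fields."""
        fields, position = {}, 0
        for name, width in FIELDS:
            fields[name] = record[position:position + width].rstrip()
            position += width
        return fields

    record = pack_record({"name": "A. Member", "address": "12 Oak St",
                          "city": "Springfield", "state": "IL", "zip": "62704"})
    print(len(record))            # 62
    print(unpack_record(record))  # the original five fields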

Notice that in the membership list, the fields each represent one part of an address or identify the person. Together, all the fields make up a record that identifies the person and his/her address. The organization of these data must make sense to the user, the person working with the information. It is organized as one might arrange a holiday card list or other simple mailing list. It makes sense to the user to organize the information this way.

However, the storage device design is such that it doesn't know about the nature of the data; indeed, disk drives are dumb devices. The disk drive's electronics know only how to find tracks and sectors. So, in between the user at the keyboard and screen and the disk drive are the computer and the operating system, which together rearrange the organization of the data as they pass between the source and destination. If the data are going to the disk from the screen/keyboard, then the data are taken out of the mailing list organization described above, which made sense to the user, and arranged into sectors so that the data can be written on the disk surface. When the data are retrieved, the computer and operating system reorganize the data in reverse. A major amount of work is done to accomplish these conversions, and a significant amount of the operating system is dedicated to disk handling.

As technology progressed away from the punched card to screen and keyboard data entry, the unit record gave way to a more general arrangement of data coming from the source. The idea of characters, fields, and records remained. In addition, the records collectively were grouped into a file. So a file is one or more records of data. The number of characters in a field, and therefore in a record and file, can now be variable; we no longer need to deal with a fixed length of 80 characters. Programmers today deal with data storage in an infinite number of ways that make sense to the nature of the data being stored, be it accounting data, scientific data, school records, or word processing documents. However, when the data are sent to the disk, the system hardware and software must make the conversion to the arrangement that the disk drive can accommodate. The file is the unit that appears in the directory of a disk drive. If you issue the DIR command at a DOS prompt on a PC, the listing you get is at the file level. It is assumed that each of the files listed contains one or more records made up of one or more fields that are made up of one or more characters. To see what is inside a file, you must execute some sort of program or DOS command that will show the file to you.

A database is made up of one or more files that contain data of a related nature. Again, just how the data are arranged between these files is up to the programmer, who creates a file set that makes sense to the nature of the data and the nature of the use to which it will be applied. One of the files usually contains either all the data as a base reference or, if not all the data, at least the essential data against which the other files may be referenced. This most important file is called a master file.

Fields come in different types, too. First, there is a key field, which is regarded in the database as the one first looked at by the program that is using the data. For instance, to prepare the monthly meeting notice for the club membership, the corresponding secretary might define the ZIP code field of the mailing list records as the most important. This is because, when mailing large numbers of fliers, the post office will charge less per piece if they are presorted in ZIP code order. When preparing the list for printing, sorting the records into ZIP code order via the key field will save the club mailing costs.
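A short, hypothetical illustration of that presorting step: the records are ordered on the ZIP code key field before the labels are printed. The sample data are invented for the example.

    members = [
        {"name": "C. Member", "zip": "90210"},
        {"name": "A. Member", "zip": "10001"},
        {"name": "B. Member", "zip": "62704"},
    ]

    # Sort on the key field (ZIP code) so the mailing goes out presorted.
    for record in sorted(members, key=lambda r: r["zip"]):
        print(record["zip"], record["name"])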

Fields can be described by the nature of the data they contain. Fields which contain only alphabetic letters are called alphabetic fields; those that contain only numbers are called numeric fields; those that contain a mix of letters and numbers are called alphameric or alphanumeric fields. Alphabetic fields and alphanumeric fields contain data that usually are stored as-is. Numeric fields, however, are kept pure so that their contents can go directly to a mathematical processing routine. Numeric fields can also be compressed and stored in dense form; this saves disk space if the amount of numbers to be stored is large. There is also a logical field, composed of one or more bytes, whose contents or bit positions represent answers to "yes or no" questions. For instance, it would be possible to store a single byte in the record along with the name and address to indicate 8 different yes-or-no answers. These could include "has the member paid this year's dues? Yes or No", with a 1 for a yes and a 0 for a no.
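For illustration, here is a minimal sketch of such a one-byte logical field, with each bit standing for one yes-or-no answer. The flag names are invented for the example.

    DUES_PAID  = 0b00000001   # bit 0: has the member paid this year's dues?
    NEWSLETTER = 0b00000010   # bit 1: does the member receive the newsletter?
    # ... up to eight yes-or-no answers fit in the single byte

    flags = 0
    flags |= DUES_PAID                  # record a "yes" for dues paid
    print(bool(flags & DUES_PAID))      # True  - dues are paid
    print(bool(flags & NEWSLETTER))     # False - no newsletter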

Records have a set of characteristics as well. The most obvious is whether the record is a fixed-length record or a variable-length record. The fixed-length record is easy to deal with since all the records are the same length. This is easily seen in the punched card, where the data were of a fixed length physically as well as logically. Dealing with this kind of record is easy to do in programming. Accordingly, this type of record storage is the most common and is used most of the time. Variable-length records mean that the records are not a fixed size, but can each be longer or shorter than the last, because there is no reason to store blank characters if a record is shorter, and there is a reason to store non-blank characters if it is longer. Programming with variable-length records is difficult because a method must be devised to determine where one record ends and the next begins - the programmer can no longer depend on a fixed number of characters per record.
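One common way to mark where a variable-length record ends is to prefix each record with its length; the two-digit prefix used in this sketch is only an assumption for illustration, not a scheme described in these notes.

    def write_records(records):
        """Store each record with a 2-digit length prefix (works for records up to 99 characters)."""
        return "".join(f"{len(r):02d}{r}" for r in records)

    def read_records(data):
        """Walk the stream, using each length prefix to find where the record ends."""
        records, position = [], 0
        while position < len(data):
            length = int(data[position:position + 2])
            records.append(data[position + 2:position + 2 + length])
            position += 2 + length
        return records

    blob = write_records(["A. Member", "B. Longname, Esq."])
    print(read_records(blob))   # ['A. Member', 'B. Longname, Esq.']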

At the file level, the method of accessing data in a file can take several directions. The first question is how large the body of data to be stored in the file is. For example, a credit card company might have millions of customers, many of whom have multiple cards. How do you find one client's records among all those millions? Several ways of storing the data within the file address this question.

The simplest and most obvious way of storing data is the sequential file. This file contains records in an order sorted by a key field within the records. Again, a file sorted into ZIP code order is a good example. To find the address of a single person within the file, the program begins at the beginning and looks at the first record to see if it is the one desired. If it is, the search is over quickly. However, if the first record of the file is not the one desired, the program then reads in the second record and tries again. If the record we are looking for is close to the beginning of the file, it takes little time to find it. However, if the record we want is near or at the end of the file, it might take a long time to go through the thousands of records we don't want in order to find the one we do want. This usually is unacceptable in anything other than small data sets.
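A minimal sketch of that sequential scan, with a list standing in for the file and an invented ZIP code key field:

    def sequential_find(records, wanted_zip):
        """Read records from the start until the key field matches."""
        for record in records:
            if record["zip"] == wanted_zip:
                return record
        return None   # reached the end of the file without a match

    members = [{"zip": "10001"}, {"zip": "30301"}, {"zip": "62704"}]
    print(sequential_find(members, "62704"))  # found only after reading every earlier record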

An improvement on the sequential method of file access is the indexed-sequential file. This method consists of two files. The first is the large file of many records that contains all the details about each person in the club, credit card client, or machined part. This file is in random order; it is not necessary to keep it organized. The only thing we need to do is to make sure that the records are filled in correctly. Then we build a small file called the index file, which acts like the index in a textbook. At the beginning of the processing session, a pass is made through the large file, and the key field of each record, along with the position of that record in the large file, is stored in the index file. When complete, the index file contains 2 pieces of information about each of the master file records: the key field contents, and the location of that record in the master file. When we wish to find a particular record, we look it up sequentially in the index file - this takes little time because the file is small and the entries in it are short. When we find the item we want, the entry in the index file gives us the record location for that item in the big file. So we take this information and find the item in the big file directly, that is, without going through all the entries ahead of the one we want.
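A sketch of the two-file idea, assuming a small in-memory master file and a ZIP code key field (both invented for the example): one pass builds the index, and a lookup then jumps straight to the right record.

    master = [
        {"zip": "62704", "name": "A. Member"},
        {"zip": "10001", "name": "B. Member"},
        {"zip": "30301", "name": "C. Member"},
    ]

    # Build the index file: key field contents plus the record's location in the master file.
    index = {record["zip"]: position for position, record in enumerate(master)}

    # To find one record, search the small index, then go directly to the master file.
    position = index["10001"]
    print(master[position])   # {'zip': '10001', 'name': 'B. Member'}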

Another method of finding information in a large file is called a binary search. In this method, the large file is sorted by a key field in each record. This takes time, but it puts all the records in some sort of logical ascending order. Again, the ZIP code field in a large set of records is a good example. When we wish to find a particular entry in the file, we go to the record in the middle of the file, obtain its key field data, and compare it to the one we want. If the desired data has a higher value than the record obtained from the middle of the file, we know that the one we want is in the second half of the file, above our current location. We therefore know that the desired data are not in the first half of the file. Conversely, if the desired key field data are less than that of the middle record of the file, we know that the data we want are in the first half of the file. Immediately, we have eliminated half the file as not having the data we want.

We continue with the half of the file that contains our data, and again go to the middle of that group of records. Again, the item we want is either above the middle of the second half (the upper 1/4 of the file), or below the middle of the second half (the third 1/4 of the file). We can repeat this "divide and conquer" several times until we zero in on the target record. This is a very fast method, and it takes roughly the same amount of time to find any record in the file, regardless of whether the desired record is at the beginning of the file, at the end of the file, or in the middle.
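A minimal sketch of the divide-and-conquer search, assuming the file has already been sorted on its key field (a sorted list of ZIP codes stands in for the file here):

    def binary_find(sorted_keys, wanted):
        """Repeatedly halve the search range of a key-sorted file."""
        low, high = 0, len(sorted_keys) - 1
        while low <= high:
            middle = (low + high) // 2
            if sorted_keys[middle] == wanted:
                return middle              # found the target record
            if sorted_keys[middle] < wanted:
                low = middle + 1           # discard the first half
            else:
                high = middle - 1          # discard the second half
        return -1                          # not in the file

    zips = ["10001", "30301", "48104", "62704", "90210"]
    print(binary_find(zips, "62704"))      # 3, found after only a few comparisons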

    A Few Words About Magnetism

A student of basic electronics or physics is soon confronted with the ideas and theories behind magnetism. Unlike electronic current flow, in which actual matter, the electron, is moving, magnetism is concerned with pure energy levels that have no weight and take no space. As such, it is sometimes difficult for the student to visualize the ideas behind it.

Electron flow in a copper wire or other conductor is the result of a pressure placed on the ends of the wire that the electrons within the wire cannot resist. The electrical pressure is called Electromotive Force, or EMF, and its unit of measure is the Volt. EMF is created by storing a bunch of electrons at one end of the wire and a bunch of positive ions, or atoms missing electrons, at the other end of the wire. The electrons in the copper atoms within the wire feel an attraction for the positive-ion end of the wire and are repelled by the end that has too many electrons already. These electrons, therefore, tend to move toward the positive end of the wire. When electrons move, it is said that we have Current flowing in the wire. The unit of measure of current is the Ampere.

Electrons spin about their own axes as they move along the wire. This spinning creates a magnetic field between the poles of the electron, just like the earth's magnetic field between the North and South Poles. As the electron moves along, it takes its magnetic field with it. This traveling field is the basis of the science of electromagnetics, the science of magnetic lines of force created by the movement of electrons. It provides us with all the theory necessary to build motors, generators, electric lights, stereo sets, radio and television, and all the goodies of the plug-in world.

Magnetism is made up of lines of magnetic force. As we said, these are pure energy, not matter in motion. It is the same basic idea of energy as the light showering down from the fluorescent tubes in the classroom ceiling. If light were matter, we would gradually fill the room with it, and we would all walk around glowing on the head and shoulders where the light had fallen. Magnetic lines of force, like electrons, travel in some materials better than others. Iron, nickel, cobalt, and various alloys are used to conduct lines of force. However, where electrons won't travel through things like wood, plastic, and glass, lines of force pass through these unchanged. So electrons don't flow unless they are allowed to, while lines of force flow unless they are stopped.

If we take a wire and wrap it about a core made of a magnetic substance, and then pass an electron current through the wire, the lines of force created by the moving electrons will be concentrated into the magnetic core. This in turn will tend to hold the lines, and the core may continue to hold some after the current is turned off. Lines of force in a core that has no current nearby are called residual magnetism.

If we take a coil of wire and connect it to a sensitive meter or measuring device, and then pass a core with residual magnetism past or through the coil, the meter will indicate that as the core passed, a current attempted to flow and an electromotive pressure was created. If a complete path from one end of the wire to the other is present, the current will indeed flow, because the magnetic fields of the electrons (remember they are spinning) will interact with the passing magnetic field of the core and this will force the electrons to move - this is called motor action. If the ends of the wire coil are connected to an amplifier device, the electromotive pressure or voltage built up at its ends can be seen by the circuitry and put to use, perhaps as 1's and 0's.


    Magnetic Data Storage

    We can take advantage of these phenomena with the laws of Physics dealing wi

