The development of computers is the story of man’s continuous search for better and faster ways to count, write and communicate. Although modern electronic computers are only a recent phenomenon, the ideas and devices leading to their advent date far back in history. Early man, when he started living a settled life in the Stone Age, used pebbles and stones for counting items and markings on walls for storing the counts. The discovery of zero by Indian mathematicians laid the foundation stone of the number system. We can identify three distinct stages in the evolution of computers from these simple ideas and devices to the complex and sophisticated devices of today:
- Early Computing Devices
- Early Electronic Computers
- Modern Computers
Early Computing Devices
The earliest device that qualifies as a digital computer is the Abacus, also known as the Soroban. This device permits the user to represent numbers by the position of beads on a rack. Simple addition and subtraction can be carried out rapidly and efficiently by positioning the beads appropriately. Although the Abacus was invented around 600 BC, it is interesting to note that it is still used in the Far East and its users can calculate at amazing speeds.
Another manual calculating device was Napier’s Bones, designed by John Napier, a Scottish scientist. It used a set of eleven rods called ‘bones’ with numbers carved on them. It was designed in the early 17th century and its upgraded versions were in use even around 1890. Pascaline, designed by Blaise Pascal in 1642, was the first mechanical calculating machine. Pascaline used a system of gears, cogwheels and dials for carrying out repeated additions and subtractions. Later, in 1671, Baron Gottfried von Leibniz of Germany developed a similar mechanical calculator that could also perform multiplication and division. In 1820, a similar key-driven mechanical calculator, called the Thomas Arithmometer, met with commercial success. It was developed by Charles Xavier Thomas of France.
The 19th century witnessed major advances in computing technology. An important breakthrough in the development of computing was the concept of a stored program to control calculations. In the early 19th century a Frenchman, Joseph Jacquard, invented a loom that used punched cards to automatically control the manufacture of patterned cloth. Jacquard’s idea of storing a sequence of instructions on cards is conceptually similar to modern computer programs.
The year 1822 could be considered a golden year in the history of computer science. It was in 1822 that an Englishman, Charles Babbage, a Professor of Mathematics at Cambridge University, developed and demonstrated a working model of a mechanical computer called the Difference Engine. It could solve complex algebraic equations and was built on the principle of the gearing wheels of an earlier era. It was also able to produce mathematical and statistical tables correct up to 20 digits.
Encouraged by the success of the Difference Engine, Babbage, in 1833, came out with his new idea of the Analytical Engine. It could store 1000 numbers of 50 decimals each. It was to be capable of performing the basic arithmetic functions for any mathematical problem, and it was to do so at an average speed of 60 additions per minute. Unfortunately, he was unable to produce a working model of this machine, mainly because the precision engineering required to manufacture it was not available during that period.
However, his efforts established a number of principles which have been shown to be fundamental to the design of any digital computer. Many features of the Engine – punched card instructions, internal memory and an arithmetic unit to perform calculations – were incorporated in the early computers designed a hundred years later. His disciple, a brilliant mathematician, Lady Ada Lovelace, the daughter of the famous English poet Lord Byron, wrote what is regarded as the first program for Babbage’s machine. She is widely considered the first programmer in the world, and the programming language ADA is named after her. Babbage’s Engines were more closely related to modern computers than any of their predecessors. Many people today regard Charles Babbage as the real father of computers.
Keyboard machines originated in the United States around 1880 and are extensively used even today. Around this period, Herman Hollerith came up with the concept of punched cards, which went on to be extensively used as input media in digital computers. Business machines and calculators made their appearance in Europe and America towards the end of the nineteenth century.
In the 1880s, the Census Bureau of the United States appointed Herman Hollerith to develop a technique for speeding up the processing of census data. It was a major breakthrough of the 19th century when he developed a machine that used punched cards to store the census data. This innovation reduced the processing time from 8 years to less than 3 years.
In the 1890s, many other countries like Canada, Australia and Russia also used the Hollerith Machine for processing their census data. Later, many other large organizations, such as insurance companies, used the Hollerith Machine to speed up their processing activity. In the 1890s Hollerith left the Census Bureau and started the Tabulating Machine Company, which, in 1911, merged with several other firms to form the Computing-Tabulating-Recording Company, renamed International Business Machines (IBM) Corporation in 1924.
During the early part of the 20th century there was a flurry of activity in the computing field. Due to the onset of World War II, there was a great need for devices that could produce ballistic tables very quickly. In the late 1930s and early 1940s many projects got underway.
In this period, under the direction of George Stibitz of Bell Telephone Laboratories, five large-scale computers were developed. These computers were built using electromechanical relays and were called the Bell Relay Computers. They were capable of performing calculations with high speed and accuracy.
The world’s first electro-mechanical computer was developed by Dr Howard Aiken of Harvard University and produced by IBM in the year 1944. This computer used more than 3000 relays, was 50 feet long and 8 feet high, and was named the Automatic Sequence Controlled Calculator, also called the Mark-I. It was a quite fast machine for its time and could add two numbers in 0.3 seconds and multiply two numbers in 4.5 seconds. This computer may be regarded as the first realization of Babbage’s Analytical Engine. Aiken went on to develop the Mark-II through Mark-IV.
Early Electronic Computers
- Atanasoff-Berry Computer (1939-42)
This electronic machine was developed by Dr John Atanasoff to solve certain mathematical equations. It was called the Atanasoff-Berry Computer, or ABC, after its inventor and his assistant, Clifford Berry. It used 45 vacuum tubes for internal logic and capacitors for storage.
- ENIAC (1943-46)
The Electronic Numerical Integrator And Calculator (ENIAC) was the first all-electronic computer. It was constructed at the Moore School of Engineering of the University of Pennsylvania, USA, by a design team led by Professors J. Presper Eckert and John Mauchly.
ENIAC was developed as a result of military need. It occupied a 20 x 40 foot room and used 18,000 vacuum tubes. The addition of two numbers was achieved in 200 microseconds, and multiplication in 2000 microseconds. Although much faster than the Mark-I, ENIAC had two major shortcomings: it could store and manipulate only a very limited amount of information, and its programs were wired on boards. These limitations made it difficult to detect errors and to change the programs, so its use was limited. Whatever its shortcomings, however, ENIAC represented an impressive feat of electronic engineering and was used for many years to solve ballistic problems.
The operation of ENIAC was seriously handicapped by the wiring boards. This problem was later overcome by the new concept of the stored program, developed by Dr John von Neumann. The basic idea behind the stored-program concept is that a sequence of instructions, as well as data, can be stored in the memory of the computer for the purpose of automatically directing the flow of operations. The stored-program feature considerably influenced the development of modern digital computers, and because of this feature we often refer to modern digital computers as stored-program digital computers. The Electronic Discrete Variable Automatic Computer (EDVAC) was designed on the stored-program concept. Von Neumann is also credited with introducing the idea of storing both instructions and data in binary form instead of as decimal numbers or human-readable words.
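The stored-program idea can be illustrated with a toy machine, sketched here in Python. The instruction format and opcode names are entirely hypothetical; the point is only that instructions and data live in the same memory, and the machine fetches each next instruction from that memory rather than from a wiring board.

```python
# A toy stored-program machine: instructions and data share one memory.
# Each instruction is a hypothetical (opcode, address) pair.
def run(memory):
    acc, pc = 0, 0            # accumulator and program counter
    while True:
        op, addr = memory[pc]  # fetch the next instruction FROM memory
        pc += 1
        if op == "LOAD":       # acc = memory[addr]
            acc = memory[addr]
        elif op == "ADD":      # acc += memory[addr]
            acc += memory[addr]
        elif op == "STORE":    # memory[addr] = acc
            memory[addr] = acc
        elif op == "HALT":
            return memory

mem = [
    ("LOAD", 4),   # 0: load the value at cell 4
    ("ADD", 5),    # 1: add the value at cell 5
    ("STORE", 6),  # 2: store the result in cell 6
    ("HALT", 0),   # 3: stop
    7,             # 4: data
    35,            # 5: data
    0,             # 6: result goes here
]
print(run(mem)[6])   # prints 42
```

Changing the program means changing the contents of memory, not rewiring boards; this is exactly the flexibility that ENIAC lacked.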
- EDSAC (1949)
Almost simultaneously with the EDVAC in the USA, the British developed the Electronic Delay Storage Automatic Calculator (EDSAC). The machine executed its first program in May 1949. In this machine, an addition operation was accomplished in 1500 microseconds, and a multiplication operation in 4000 microseconds. The machine was developed by a group of scientists headed by Professor Maurice Wilkes of Cambridge University.
- MANCHESTER MARK-I (1948)
This computer was a small experimental machine based on the stored-program concept. It was designed at Manchester University by a group of scientists headed by Professor M H A Newman. Its storage capacity was only 32 words, each of 31 binary digits. This was too limited to store data and instructions. Hence, the Manchester Mark I was hardly of any practical use.
- UNIVAC-I (1951)
The Universal Automatic Computer (UNIVAC) was the first digital computer that was not “one of a kind”. Many UNIVAC machines were produced, the first of which was installed in the Census Bureau in 1951 and was used continuously for 10 years. The first business use of a computer, a UNIVAC-I, was by General Electric Corporation in 1954. In 1952, the International Business Machines (IBM) Corporation introduced the 701 commercial computer. In rapid succession, improved models of the UNIVAC-I and other 700-series machines were introduced. In 1953, IBM produced the IBM-650 and sold over 1000 of these computers.
The commercially available digital computers that could be used for business and scientific applications had arrived. During the late 1940s and early 1950s many other stored-program computers like ILLIAC, JOHNNIAC and MANIAC were developed. With these developments in the computing field and advancements in technology, the stage was set for modern computers.
The Modern Computers
Early electronic computers were used exclusively for military, experimental and engineering purposes. In the early 1950s, computers began to be sold commercially. The development of the commercial computer industry was really the beginning of the computer revolution.
The modern computer era can be divided into generations, distinguished by the basic electronic component within the computer. Each new logic unit has led to computers that are faster, smaller, more reliable and less expensive than their predecessors. Modern computers come in a variety of shapes, sizes and costs.
The Computer Generations
“Generation” in computer talk is a step in technology. It provides a framework for the growth of the computer industry. Originally, the term “generation” was used to distinguish between varying hardware technologies. But nowadays, it has been extended to include both the hardware and the software together that make up an entire computer system.
The custom of referring to the computer era in terms of generations came into wide use only after 1964. Five computer generations are recognized to date. Although there is a certain amount of overlap between the generations, the approximate dates shown against each are generally accepted.
- First-Generation Computers (1942-1955)
First-generation computing involved massive computers using hundreds or thousands of vacuum tubes for their processing and memory circuitry. These large computers generated enormous amounts of heat; their vacuum tubes had to be replaced frequently. Thus, they had large electrical power, air conditioning, and maintenance requirements. First-generation computers had main memories of only a few thousand characters and millisecond processing speeds. They used magnetic drums or tape for secondary storage. Examples of some of the popular first-generation computers include ENIAC, EDVAC, UNIVAC-I, IBM-701, IBM-650, and the IAS Machine.
- Second-Generation Computers (1955-1964)
Second-generation computing used transistors and other solid-state, semiconductor devices that were wired to circuit boards in the computers. Transistorized circuits were much smaller and much more reliable, generated little heat, were less expensive, and required less power than vacuum tubes. Tiny magnetic cores were used for the computer’s memory, or internal storage. Many second-generation computers had main memory capacities of less than 100 kilobytes and microsecond processing speeds. Removable magnetic disk packs were introduced, and magnetic tape emerged as the major input, output, and secondary storage medium for large computer installations. Examples of some of the popular second-generation computers include IBM-1620, IBM-7094, CDC-1604, CDC-3600, UNIVAC-1108, PDP-1 and NCR-304.
- Third-Generation Computers (1964-1975)
Third-generation computing saw the development of computers that use integrated circuits, in which thousands of transistors and other circuit elements are etched on tiny chips of silicon. Main memory capacities increased to several megabytes, and processing speeds jumped to millions of instructions per second (MIPS), as telecommunications capabilities became common. This made possible the widespread use of operating systems, which automated and supervised the activities of many types of peripheral devices and allowed mainframe computers to process several programs at the same time, frequently involving networks of users at remote terminals. Integrated circuit technology also made possible the development and widespread use of small computers called minicomputers in the third computer generation.
Examples of some of the popular third-generation computers include IBM-360 Series, IBM-370 Series, HCL-2900 Series, Honeywell-6000 Series, PDP-8 and VAX.
- Fourth-Generation Computers (1975-2000)
Fourth-generation computing relies on the use of LSI (large-scale integration) and VLSI (very-large-scale integration) technologies that cram hundreds of thousands or millions of transistors and other circuit elements on each chip. This enabled the development of microprocessors, in which all of the circuits of a CPU are contained on a single chip with processing speeds of millions of instructions per second. Main memory capacities ranging from a few megabytes to several gigabytes were also achieved by memory chips that replaced magnetic core memories. Microcomputers, which use microprocessor CPUs and a variety of peripheral devices and easy-to-use software packages to form small personal computer (PC) systems or client/server networks of linked PCs and servers, are a hallmark of the fourth generation of computing, which accelerated the downsizing of computing systems. Examples of some of the popular fourth-generation computers include DEC-10, STAR1000, PDP-11, CRAY-1 (supercomputer), CRAY X-MP (supercomputer), CRAY-2 and IBM PC/AT.
- Fifth-Generation Computers (2000-…)
Computer scientists and engineers are now talking about developing fifth-generation computers, which can ‘think’. The emphasis is now shifting from developing reliable, faster and smaller but ‘dumb’ machines to more ‘intelligent’ machines. Fifth-generation computers will be highly complex knowledge-processing machines. Japan, the USA and many other countries are working on systems that use Artificial Intelligence. Automatic programming, computational logic, pattern recognition and control of robots are among the processes used in these computers. These computers will run at billions of instructions per second and will have enormous storage capacities. They will be interactive, and:
- will be able to do multiple tasks simultaneously
- will have a parallel structure as compared to the serial structure of the fourth generation
- will not be algorithmic
- will do knowledge processing rather than data processing, with a KIPS (Knowledge Information Processing System) architecture
- applications will be based on Expert Systems
- will interact with user in human language
- will be very cheap, with super speeds
- will have decision-making capabilities
Figure 3-3 highlights trends in the characteristics and capabilities of computers. Notice that computers continue to become smaller, faster, more reliable, less costly to purchase and maintain, and more interconnected within computer networks. To draw an analogy with the automobile industry: if it had grown like the computer industry, a Rolls Royce would cost around Rs 80, run a million miles on a litre of petrol, and be the size of a matchbox. Even though computers in the last 50 years have become very fast, reliable and inexpensive, the basic logical structure proposed by von Neumann has not changed. The basic block diagram of a CPU, memory and I/O is still valid today. With the improvements in integrated circuit technology, it is now possible to get specialized VLSI chips at a low cost. Thus an architecture that makes use of the changes in technology and allows easier and more natural problem solving is being sought.
In 1965, Gordon E Moore predicted, based on data available at that time, that the density of transistors in integrated circuits would double at regular intervals of around 2 years. Based on the experience from 1965 to date, his prediction has been surprisingly accurate. In fact, the number of transistors per integrated circuit chip has approximately doubled every 18 months. This observation has been called “Moore’s Law”. In Figure 3-4 we have given two plots. One gives the number of transistors per chip in Dynamic Random Access Memory (DRAM) along the y-axis and years along the x-axis. Observe that the y-axis is a logarithmic scale and the x-axis a linear scale. The second plot gives the number of transistors in microprocessor chips. Observe that in 1974 the largest DRAM chip held 16 Kbits, whereas in 1998 it held 256 Mbits, an increase of roughly 16,000 times in 24 years. The increase in the number of components in microprocessors has been similar. It is indeed remarkable that the growth has been sustained over 30 years. The availability of large memory and fast processors has in turn increased the size and complexity of systems and applications software. It has been observed that software developers have always consumed the increased hardware capability faster than the growth in hardware. This has kept up the demand for hardware.
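The DRAM figures quoted above can be checked with a few lines of arithmetic. The sketch below computes the implied doubling period from the 1974 and 1998 data points cited in the text; it comes out at roughly 20 months, broadly consistent with Moore’s 18-24 month range.

```python
from math import log2

# Figures cited in the text: the largest DRAM chip held
# 16 Kbits in 1974 and 256 Mbits in 1998.
start_bits = 16 * 1024
end_bits = 256 * 1024 * 1024

growth = end_bits / start_bits            # 16384-fold increase
doublings = log2(growth)                  # 14 doublings
months = (1998 - 1974) * 12 / doublings   # months per doubling

print(f"{growth:.0f}x growth, {doublings:.0f} doublings, "
      f"one every {months:.1f} months")
# prints: 16384x growth, 14 doublings, one every 20.6 months
```

Exponential growth like this is why the y-axis of Figure 3-4 must be logarithmic: on a linear scale the early decades would be invisible.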
Another interesting point to note is the increase in disk capacity. In 1984, disk capacity in PCs was around 20 MB, whereas it was 20 GB in 2000 – a 1000-fold increase in about 16 years, again doubling roughly every 18 months, similar to Moore’s Law. These improvements in the capacity of computers have come about with hardly any increase in price. In fact, the price of computers has been coming down.
The implication of Moore’s Law is that in the foreseeable future we will be getting more powerful computers at reasonable cost. It will be up to our ingenuity to use this increased power effectively. It is clear that applications such as speech recognition and voice and video user interfaces, which require large amounts of memory and computing power, will be extensively used.
Classification Of Computers
The various generations of computers show the development of computers from the early stages. But even today, not all computers are of the same type. Computers come in many different sizes and ranges of power, and different types of computer systems have varying capabilities.
Computers can be divided into the following categories by functional criteria (data representation):
- Digital Computers
- Analog Computers
- Hybrid Computers
A digital computer, as the name suggests, works with digits. In other words, a digital computer is a counting device. All expressions are coded into binary digits (0s and 1s) inside the computer, which stores and manipulates them at very high speed. At the hardware level, a digital computer essentially performs only one arithmetic operation: addition. The other operations of subtraction, multiplication and division are carried out with the help of the addition operation. Digital computer circuits, designed and fabricated by the manufacturers, are quite complicated. A digital computer manipulates data according to the instructions (program) given to it in a certain language. The instructions and data are fed to the computer in the form of discrete electrical signals.
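The claim that subtraction reduces to addition can be seen in two’s-complement arithmetic, the representation of negative numbers used by most digital computers. The sketch below, in Python for illustration, computes a difference using only bit inversion and addition:

```python
def subtract_via_addition(a, b, bits=8):
    """Compute a - b using only addition (and bit inversion).

    In two's complement, -b is represented as (~b + 1), so
    a - b becomes a + (~b + 1), taken modulo 2**bits.
    """
    mask = (1 << bits) - 1
    neg_b = ((~b) + 1) & mask   # two's complement of b acts as -b
    return (a + neg_b) & mask

print(subtract_via_addition(9, 4))   # prints 5
print(subtract_via_addition(4, 9))   # prints 251, i.e. -5 in 8-bit two's complement
```

Multiplication and division can in turn be built from repeated addition (or shift-and-add), which is why a single adder circuit suffices for all four basic operations.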
- General Purpose Computers:
These computers are designed for use in many different types of applications in different areas. They can be used to prepare pay bills, manage inventories, print sales reports or solve mathematical equations. When one job is over, another job can be loaded into memory for processing. They are versatile; hence most businesses today use general-purpose computers.
- Special Purpose Computers:
Digital computers that are made to meet the requirements of a particular task or job are called special-purpose computers – for example, computers used for weather forecasting or for space applications. They are also known as dedicated computers. Typical special-purpose computers are:
- Word Processor: This computer is most versatile for office-automation purposes and replaces the typewriter. It is widely used for the production of office documents, letters, contracts, pay bills, etc. It is a computer that deals with bulk input and produces bulk printed output, without involving many calculations or interconnected programs. It also works as a very high-speed duplicating machine and lets the user look through old and related documents as and when desired.
- Hidden Computers: Such computers are designed to control a particular process or job and are installed inside the machine being automated. These are digital computers with hybrid applications. Electronic message-switching systems in LANs and WANs, robots in intensive care units and other hospital instruments, automatic washing machines, digital clocks and hot-shot cameras all use such computers.
- Optical Computers: These are still under design and development. Here the application of quaternary (four-valued) logic in place of binary logic, together with optical-fibre technology, is being explored.
- Hand-Held and Pocket Computers: These are very small in size, with memories of 4 to 16 KB. They are used for personal and scientific computing.
- Knowledge Information Processing Systems (KIPS): These process information, not data. They incorporate artificial intelligence (the ability to reason, think and make decisions). In such computers, hardware and software work in parallel, with capabilities of 100 million to a billion LIPS (logical inferences per second). One LIPS is equivalent to 100 to 1000 instructions per second.
- Dedicated Word Processor: Used for office automation, it is widely applied to the production of office documentation, letters, memos, contracts, pay bills, etc.
Analog computers represent numbers by a physical quantity; that is, they assign numeric values by physically measuring some actual property, such as the length of an object or the amount of voltage passing through a point in an electric circuit. Analog computers derive all their data from some form of measurement. Though effective for some applications, this method of representing numbers is a limitation of analog computers: the accuracy of the data used in an analog computer is directly related to the precision of its measurement. Speedometers, voltmeters, pressure gauges, slide rules, flight simulators for training pilots and wall clocks are some examples of analog devices.
Hybrid computers combine the best features of analog and digital computers. They have the speed of analog computers and the accuracy of digital computers. They are usually used for special problems in which input data derived from measurements is converted into digits and processed by the computer. Hybrid computers, for example, control national defence and passenger-flight radars.
Consider the hybrid computer system in a hospital Intensive Care Unit (ICU). The analog devices may measure a patient’s heart function, temperature and other vital signs. These measurements may then be converted into numbers and supplied to a digital device, which may send an immediate signal to a nurse’s station if any abnormal readings are detected.
To take another example of a hybrid computer, consider the system used in producing iron-ore pellets for steel making. It controls manufacturing and prepares production data on inventory and costs. The computer accepts data both from sensors within the production area and from conventional input/output devices. It can act as an analog computer, converting measurements into numbers, and as a digital computer, using the stored production data to plan future manufacturing and distribute existing inventories for management.
We can also classify computer systems into the following categories using capacity and performance criteria (size, cost, speed and memory):
- Supercomputers
- Mainframe computers
- Minicomputers, or Midrange computers
- Microcomputers, or Personal computers
All of these computers can be connected to form networks of computers, but each individual computer, whether or not it is on a network, falls into one of these categories. As we will see, some of these categories – especially microcomputers – can be divided into subcategories, some of which are growing rapidly enough to become major categories in their own right.
Supercomputers are the most powerful computers made, and physically they are some of the largest. These systems are built to process huge amounts of data, and the fastest supercomputers can perform more than a trillion calculations per second. Some supercomputers, such as the Cray T-90 system, can house thousands of processors. This speed and power make supercomputers ideal for handling large and highly complex problems that require extreme calculating power, such as numerical weather prediction, design of supersonic aircraft, design of drugs and modeling of complex molecules. Recently the use of supercomputers has expanded beyond scientific calculations. They are now used to analyse large commercial databases, produce animated movies and play games such as chess.
Supercomputers can cost tens of millions of dollars and consume enough electricity to power dozens of homes. They are often housed in protective rooms with special cooling systems, power protection and other security features. Because of their size and cost, supercomputers are relatively rare, used only by large corporations, universities and government agencies that can afford them. Supercomputing resources are often shared to give researchers access to these precious machines.
The largest type of computer in common use is the mainframe. Mainframe computers are used in large organizations like insurance companies and banks where many people need frequent access to the same data, which is usually organized into one or more huge databases.
Mainframes are being used more and more as specialized servers on the World Wide Web, enabling companies to offer secure transactions with customers over the Internet. If we purchase an airline ticket over the Web, for example, there is a good chance that our transaction is being handled by a mainframe system. In this type of application, the mainframe system may be referred to as an enterprise server or an electronic commerce (e-commerce) server.
In a traditional mainframe environment, each user works at a computer terminal. A terminal is a monitor and a keyboard (and sometimes a pointing device, such as a mouse) wired to the mainframe. There are basically two types of terminals used with mainframe systems. A dumb terminal does not have its own CPU or storage devices; these components are housed in the mainframe’s system unit and are shared by all users. Each dumb terminal is simply an input/output (I/O) device that functions as a window into a computer located somewhere else. An intelligent terminal, on the other hand, has its own processor and can perform some processing operations. Intelligent terminals, however, do not usually provide any storage.
Many enterprises are now connecting personal computers and personal computer networks to their mainframe systems. This connection gives users access to mainframe data and services and also enables them to take advantage of local storage and processing, as well as other features of the PC or network.
A mainframe system can house an enormous volume of data, containing literally billions of records. Large mainframe systems can handle the input and output requirements of several thousand terminals. The IBM S/390 mainframe, for example, can support 50,000 users simultaneously while executing more than 1,600,000,000 instructions per second. It used to be common for mainframe computers to occupy entire rooms or even an entire floor of a high-rise building. Typically, they were placed inside glass offices with special air conditioning to keep them cool and on raised floors to accommodate all the wiring needed to connect the system. This setup is not used much anymore. Today, a typical mainframe computer looks like an unimposing file cabinet, or a row of file cabinets, although it may still require a somewhat controlled environment.
First released in the 1960s, minicomputers got their name because of their small size compared to other computers of the day. The capabilities of minicomputers lie between those of mainframes and personal computers. (For this reason, minicomputers are increasingly being called midrange computers.) Like mainframes, minicomputers can handle much more input and output than personal computers can.
Although some “minis” are designed for a single user, most are designed to handle multiple terminals and the data-sharing needs of other computers on a network. The most powerful minicomputers can serve the input and output needs of hundreds of users at a time. Single-user minicomputers are commonly applied to sophisticated design tasks such as animation and video editing.
Somewhere between multi-user midrange computers and personal computers are workstations. Workstations are specialized, single-user computers with many of the features of a personal computer but with the processing power of a minicomputer. These powerful machines are popular among scientists, engineers, graphic artists, animators and programmers: users who need a great deal of number-crunching power. Workstations typically use advanced processors and feature more RAM and storage capacity than personal computers. Workstations often have large, high-resolution monitors and accelerated graphics-handling capabilities, making them well suited for advanced design, modeling, animation and video editing. Although workstations are often found in single-user applications, they are more and more used as servers on personal computer networks and as Web servers.
Until a few years ago, the term workstation implied certain differences in chip design and operating system, making it distinct from a personal computer. (The term workstation is also used to describe a single computer on a network. In this context, a workstation is usually a personal computer.) Today, the differences between minicomputers, workstations and personal computers are becoming blurred. Low-end minicomputers and high-end workstations are now similar in features and capabilities. The same is true for low-end workstations and high-end personal computers.
Some manufacturers of workstations are Silicon Graphics (SGI), Digital Equipment Corporation (DEC), IBM, Sun Microsystems and Hewlett-Packard (HP). The standard operating system on workstations is UNIX and its derivatives, such as AIX (IBM), Solaris (Sun) and HP-UX (HP).
MICROCOMPUTERS OR PERSONAL COMPUTERS
The terms microcomputer and personal computer are interchangeable, but PC, which stands for personal computer, sometimes has a more specific meaning. In 1981, IBM called its first microcomputer the IBM PC. Within a few years, many companies were copying the IBM design, creating "clones" or "compatibles" that were meant to function like the original. For this reason, the term PC has come to mean the family of computers that includes IBMs and IBM compatibles. The vast majority of microcomputers sold today are part of this family.
One source of the PC's popularity is the rate at which improvements are made in its technology. Microprocessors, memory chips, and storage devices make continual gains in speed and capacity, while physical size and price remain stable, or in some cases are reduced. For example, compared to the typical PC of ten years ago, a machine of the same price today will have about ten times as much RAM, about 100 times more storage capacity, and a microprocessor at least 100 times as fast. What's more, many analysts believe that this pace of change will continue for another 10 or 20 years.
The microcomputer category has grown tremendously in the past decade. There are now several specific types of microcomputers, each with its own capabilities, features, and purposes. Within each subcategory of microcomputer, we can find dozens or even hundreds of unique systems. This range of options makes the microcomputer "the computer for the masses" and explains why so many systems have appeared in offices, homes, briefcases, and even pockets over the past few years. Microcomputers include the following types:
- Desktop Models including tower models
- Notebook Computers, also called Laptop Computers
- Network Computers
- Handheld Personal Computers (H/PCs) of all types
DESKTOP MODELS
The first style of personal computer introduced was the desktop model. In common usage, the term desktop system means a full-size computer that is small enough to be used at a desk but too big to carry around. Traditionally, a desktop computer's main case (called the system unit) is horizontally oriented, meaning it can lie flat on a desk or table. A variation of the desktop system is the tower model, where the system unit sits vertically and has more space for devices. Because of its design, the system unit is often placed on the floor to preserve desk space, allowing more room to place external components, such as removable disk drives or scanners, on the desktop. Tower models have become increasingly popular in recent years, so much so that some PC makers have stopped offering horizontally oriented desktop systems.
NOTEBOOK COMPUTERS, ALSO CALLED LAPTOP COMPUTERS
Notebook computers are small, easily transportable, lightweight microcomputers that fit easily into a briefcase. Laptops and notebooks are designed for maximum convenience and transportability, allowing users access to processing power and data without being bound to the office environment. The ability to provide Internet access anytime, anywhere is an additional benefit of these computers.
NETWORK COMPUTERS
Network computers are a major new microcomputer category designed primarily for use with the Internet and corporate intranets by clerical workers, operational employees and knowledge workers with specialized or limited computing applications. Network computers are low-cost, sealed, networked microcomputers with no or minimal disk storage. Users of network computers depend primarily on Internet and corporate intranet servers for their operating system, Web browsers, Java-enabled application software, and data access and storage. Sun's JavaStation, IBM's Network Station and NCD's Explora network computers are some examples of network computers.
HANDHELD PERSONAL COMPUTERS (H/PCs)
Since the mid-1990s, many new types of small personal computing devices have been introduced, all of which fall under the category of handheld personal computers (H/PCs). These tiny systems are also called palmtop computers. A handheld PC can be any sort of computer that fits in the user's hand, such as:
- Personal Digital Assistant (PDA)
- Cellular phone with Internet, e-mail and fax capabilities
- H/PC Pro Device
PERSONAL DIGITAL ASSISTANTS (PDAs)
Personal digital assistants (PDAs) are among the smallest of portable computers. Often they are no larger than a small appointment book, but they are much less powerful than notebook or desktop computers. PDAs are normally used for special applications, such as taking notes, displaying telephone numbers and addresses, and keeping track of dates or agendas. Many PDAs can be connected to larger computers to exchange data. Depending on the model, PDAs may include the following features:
- Built-in microphone and speaker, enabling the user to record speech digitally
- Personal information management (PIM) software
- Miniaturized versions of personal productivity applications
- Internet, fax, or e-mail software
Some new cellular phones are doubling as miniature PCs. Advanced cellular devices combine analog and digital cell-phone service with e-mail capabilities. Such phones enable the user to check and send e-mail and faxes over the phone. They offer features not normally found on a phone, such as personal organizers or access to the Web. Some models even break in half to reveal a miniature keyboard.
H/PC Pro DEVICES
Probably the most curious new development in handheld technology is the H/PC Pro family of devices. These systems are larger than PDAs or miniature notebooks, but they are not quite as large as typical notebook PCs, with features somewhere between the two. For example, H/PC Pro systems boast nearly full-size keyboards and color displays. They can run more types of miniaturized applications than their smaller counterparts, but those applications still do not provide the features of normal desktop software. H/PC Pro units offer long battery life and instant-on access (features still missing from many laptop systems), but they do not include disk drives. Although they will gain speed and storage capacity quickly, H/PC Pro systems currently offer very limited RAM and relatively slow processor speeds.