Introduction and Overview
It was the best of times, it was the worst of times.
Dickens, A Tale of Two Cities, 1859
Today, many businesses, corporations, and institutions are striving to optimize their
usage of computer resources. Rightsizing is the way to match available computer
resources to individual or corporate needs. This chapter introduces the concepts,
basic terminology, and major implementation strategies related to rightsizing.
WHAT IS RIGHTSIZING?

Rightsizing is a new term for describing an old but elusive goal: balancing user needs against available technology and organizational resources. Many people in the computer world are understandably reluctant to accept rightsizing as a needed addition to an already crowded vocabulary. Such terms are often short-lived or merely the products of colorful advertising campaigns. The term rightsizing, however, is very useful in describing the emerging architectures for computer systems, as we will soon see.
Rightsizing of a computer system matches user needs with available technology and resources, usually by moving software applications to the appropriate hardware platform. This balancing usually results in a redistribution of data processing, computing, and presentation tasks among the various computers within an organization. Successful redistribution leads to the most appropriate allocation and sharing of computer resources for a given time.
Rightsizing is a natural extension of the evolution of shared computer systems. In the early years of computers, the only resources capable of being shared were printers and floppy disks. As network technology developed, it became possible to share data and files. Today, distributed software applications allow users to share programs, processors, and displays.
Rightsizing is at once a strategy and a process. As a strategy, it consists of the objective of optimizing the overall use of computer resources and the goal of developing an appropriate schedule and budget. As a process, it consists of the specific steps (the activities and events) that will lead to the desired outcome.
In theory, rightsizing could lead to the migration, or moving, to a less powerful computer, depending upon the requirements and economics of the needed system. In practice, however, rightsizing almost always results in the movement to faster, more powerful networked systems. References in the bibliography at the end of the book provide in-depth discussions of various aspects of rightsizing.
MOTIVATION AND MISCONCEPTIONS

Such a potentially involved and time-consuming task as rightsizing would not be undertaken unless there was significant potential for cost savings and increased productivity. However, these benefits are not always realized immediately, but rather over the long term, and the benefits of one project may not apply to other areas. Thus, the professionals involved with computer rightsizing need to be aware of some common misconceptions associated with such endeavors.
Since rightsizing may result in the complete redesign of a company's computer system and information structure, the potential benefits must be significant. The most commonly cited benefits are as follows:
• Increased access to information. A key requirement in the information age is the ability to quickly access the most up-to-date information. Designing corporate computer systems to meet this need is one of the primary goals of rightsizing.
• Increased productivity. When computer resources are used to their best advantage, productivity increases are experienced by almost everyone, from software developers to end users. For example, not all applications need to be developed in a mainframe environment. Many can be created and even run on PCs. Similarly, end users typically enjoy faster response times on PC network systems than on their mainframe counterparts.
• Support of organizational changes. Since the 1980s, many corporations have eliminated their middle layers of management. This flattening of the organizational hierarchy was an attempt to reduce the isolation of upper management while empowering lower-level management with greater decision-making authority (Fig. 1-1). Rightsizing from overburdened mainframes to networked PCs connected to corporate database computers provides the necessary quick access to the very best information pertinent to the required decisions.
Figure 1-1 Organizational flattening (1980s vs. 1990s).
Misconceptions

Many misconceptions about rightsizing are based more upon the fears and wishes of those involved than on reality. Older system administrators, fearful that their many years of mainframe experience will no longer be needed, may equate rightsizing with the replacement of mainframe and minicomputers by workstations and PCs. In contrast, those who need computer-based information to do their jobs may view rightsizing as a way to speed the development of needed software application programs and reduce existing backlogs. Information managers, focused on the bottom line, may see rightsizing as the latest cost-saving technique, anticipating quick returns.
Rightsizing is all and none of the above. While mainframe computers may be retired as a result of rightsizing, they may also be replaced by newer mainframes, or the existing mainframe may be moved to a secondary role, such as a database machine. Which architecture is chosen depends on a careful assessment of system needs, available technology, and cost of all the alternatives. Similarly, rightsizing will not result in immediate reduction of backlogs in development of application and data processing programs. It takes time for system administrators, programmers, and end users to learn and fully utilize newly rightsized systems. Finally, it can also take time, often years, to realize cost savings from the rightsizing process. But delaying this process can result in lost productivity and decreased market presence, as other, more aggressive companies, incorporate rightsized systems into their business.
MAJOR RIGHTSIZING COMPONENTS
The major components affected by rightsizing fall into three basic categories: hardware, network systems, and software.
Hardware

Computers are themselves a major class of hardware systems. Traditionally, all computers were classified as one of three types, listed here in terms of decreasing size and computing power: mainframes, minicomputers, and microcomputers (e.g., workstations and PCs) (see Fig. 1-2). However, with continuing technical advances in the size and power of CPUs (central processing units) and the storage capacity of memory chips, as well as less expensive manufacturing techniques, the distinction among these three groups has blurred. Smaller, low-end mainframes are now almost indistinguishable from high-end minicomputers, and the distinction between minicomputers and microcomputers is similarly blurred.
Mainframe computers, so named because they were originally built on a large chassis or "main frame," are the fastest, largest, and most expensive of the three computer system categories. Users input data and receive processed results from the computer through a terminal, a hardware device consisting of a display screen and keyboard. Thousands of terminals are typically connected to one central mainframe computer.
Terminals should not be confused with microcomputers. Although both have similar peripheral devices, such as a monitor and a keyboard, terminals generally have far less computing power. There are three main types of terminals: dumb, smart, and intelligent. Dumb terminals can only send and receive data; they have no data processing capability. Smart terminals are the next step up, enabling the user to perform some basic data editing functions. Finally, intelligent terminals can send and receive data, and also run simple applications associated with information display, independent of the mainframe computer. (Microcomputers connected to a host machine can serve as intelligent terminals.)

The term host has many meanings. For example, when many dumb terminals are connected to one mainframe, the mainframe "hosts" the terminals by providing requested services, data storage, and input/output (I/O) resources. In a broader sense, a host computer is any physical system that interprets and runs software programs. These programs may have been written on other computers, called logical or virtual machines, that are attached to the host via a network. Thus, a host can be a mainframe, minicomputer, or microcomputer attached to a network, depending upon the system.
Minicomputers, or minis, are smaller than mainframes but still too large to be portable. Their computing power, memory capacity, cost, and number of users supported are midrange between mainframes and microcomputers. Terminals are also needed to input and receive data from minicomputers.
Microcomputers, or micros, are relatively small in comparison to mainframes or minicomputers. The microcomputer category includes desktop machines, such as PCs or workstations; laptops, the briefcase-sized portables that can be held in your lap (see Fig. 1-3); and palmtop computers. Whatever the size, microcomputers are quickly becoming as fast and powerful as existing minicomputers and even some older mainframes.
Microcomputers are standalone systems, requiring no connection to a mainframe or minicomputer host. Whereas most hosts require slave terminals to input data and display the processed results, microcomputers perform all computer-related tasks independently. PCs and workstations can be connected together via networks to share data and increase their overall power, but this is not essential.
In contrast, terminals are slaves to a host computer and cannot function separately, independent of a mainframe or minicomputer.
Although, as noted, the capabilities, performance, and cost of high-end PCs are converging with those of low-end workstations, enough differences remain to justify classifying them as distinct types. Their differences can best be noted by comparing their configurations, as shown in Table 1-1. Basically, a PC is a single-user computer running under a DOS-Windows or Macintosh operating system. It may utilize an Intel (DOS) or Motorola (Mac) CISC-based microprocessor and a Novell (DOS) or AppleTalk (Mac) network. Furthermore, the networking capability of a DOS-Windows machine is an add-on, not built into the original system architecture. In contrast, a typical workstation is a multiuser computer running a multitasking operating system such as Unix, utilizing a RISC-based processing chip with built-in TCP/IP networking capabilities.
Workstations are well suited to run processor-intensive scientific and engineering applications because they are designed with reduced instruction set computer (RISC) processors. Unlike the complex instruction set computer (CISC) processors found in most PCs, RISC chips have fewer and simpler instructions programmed into them. (A comparison of the two processor types is given in Chap. 5, Rightsizing Computer Hardware and Operating Systems.)

It should be noted that the differences cited above are generalizations, which do not always hold. For example, the PowerPC-based Macintosh has a RISC processor but runs most of its CISC-based operating system applications using special software that emulates a CISC machine on top of a RISC processor. Also, Windows 95 from Microsoft has built-in networking capability. Further, these primary differences all but disappear when low-end workstations are compared with high-end PCs. In this case, the PCs perform more and more like workstations. For example, many PCs will run a multitasking operating system like Unix or OS/2 on a TCP/IP network with a high-resolution screen. Likewise, many types of Unix systems can now emulate the DOS and Mac operating systems.
Network Systems

Almost all rightsizing strategies rely heavily on networks, which form the communication links between computers. Network-unique hardware and software are sufficiently complicated, and separate from the rest of the computer system, to form their own hardware category.
Historically, PCs have lacked networking capability. For example, DOS did not have operating system features like multitasking and network card drivers to support networking, so networking solutions consisted of software patches to DOS. This led to compatibility problems between networking software and other PC applications, and made PCs less reliable for networking than workstations built around Unix.
In the mainframe world, networking is handled by the front-end processor (FEP), typically a microcomputer or minicomputer that handles most of the communication processing tasks for the host computer, usually a mainframe. By offloading the data communication input/output functions from the host, the FEP frees up the host to concentrate exclusively on data processing activities.
Typical FEP tasks include transmitting and receiving messages, error checking, serial-to-parallel conversions, and coordinating message switching. The back-end processor, relieved of the basic "housekeeping" data communication tasks by the front-end processor, handles mostly processor-intensive functions such as data storage and manipulation. Back-end processors are usually host machines and may run on a mainframe, minicomputer, or workstation connected to a network.
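To make the error-checking task concrete, the following toy sketch shows one simple scheme an FEP-like component might apply: appending a one-byte checksum to each outgoing message and verifying it on receipt. This is purely illustrative; real FEPs used protocol-specific mechanisms such as cyclic redundancy checks, and the function names here are invented for the example.

```python
def add_checksum(payload: bytes) -> bytes:
    """Frame a message by appending a one-byte modular checksum."""
    return payload + bytes([sum(payload) % 256])

def verify_checksum(frame: bytes) -> bytes:
    """Strip and verify the checksum; raise if the frame was corrupted."""
    payload, received = frame[:-1], frame[-1]
    if sum(payload) % 256 != received:
        raise ValueError("checksum mismatch: corrupted frame")
    return payload

# A message survives a clean round trip...
frame = add_checksum(b"PAYROLL RECORD 42")
assert verify_checksum(frame) == b"PAYROLL RECORD 42"
```

By handling this kind of per-message bookkeeping, the FEP spares the host from inspecting every frame itself.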
In a contemporary computing environment, software applications and processing tasks are split, or distributed, across many different machines, all of which are connected by some kind of network. Front-end and back-end processors reside on client and server computers, respectively. A server processes and services requests made by clients. For example, a spreadsheet application, running on a client desktop computer, may need certain data that is stored on a server minicomputer in another building. The client makes a request for this data, which is transmitted over a network connecting client and server. Once the server receives the client's request, it locates the data and transmits it back to the client.
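The request-reply exchange just described can be sketched with ordinary sockets. The sketch below is a minimal, hypothetical example: the `DATA_STORE` contents and the "sales_q1" request key are invented for illustration, and a real client-server system would add a proper protocol, error handling, and concurrent request handling.

```python
import socket
import threading

# Data held on the server (hypothetical, standing in for the
# spreadsheet data stored on the minicomputer in the example).
DATA_STORE = {"sales_q1": "120,340,98"}

def serve_once(srv: socket.socket) -> None:
    """Server side: accept one client, look up the requested key, reply."""
    conn, _ = srv.accept()
    with conn:
        key = conn.recv(1024).decode()              # e.g. "sales_q1"
        conn.sendall(DATA_STORE.get(key, "").encode())

def client_request(port: int, key: str) -> str:
    """Client side: send a request over the network and return the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(key.encode())
        return cli.recv(1024).decode()

# Bind to port 0 so the OS picks a free port for the demonstration.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

server = threading.Thread(target=serve_once, args=(srv,))
server.start()
reply = client_request(port, "sales_q1")            # the client's request
server.join()
srv.close()
print(reply)
```

The division of labor mirrors the chapter's description: the client issues a request, the network carries it, and the server locates the data and transmits it back.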