Complex IT Environments: Ascertaining Information Integrity

Piet J.M. Poos
Ernst & Young EDP Audit / Nijenrode University

Keywords: information integrity, general ledger, control measures, information architecture, client/server
In this paper we look at the effects on control measures of the change from mainframe architecture to 2-tier client/server to N-tier client/server. We will see that the traditional measures in a more or less flat-file mainframe environment are no longer sufficient in a large (and complex) N-tier client/server environment.
We propose middleware solutions that move completeness controls away from the users and into the IT environment. To allow users to exercise control over information integrity, we propose to use the general ledger as a focal point for intersystem and interprocess reconciliation. This results in a new function in such an organisation: the process controller. He is the owner of the large majority of system interfaces and is responsible for their reliability.
1. INTRODUCTION

Developments in business processes and in IT have both been rapid in recent years.
Under the influence of Porter (value chain) [1] and of Hammer & Champy, business processes have been integrated within organisations as well as across organisations. The fast pace of change has led to a shift of emphasis away from mainly financially based management information to a non-financial variety.
IT departments of large organisations have been hard put to keep up with these business changes, which has been one of the reasons for the adoption of client/server applications, initially with a 2-tier client/server architecture, graduating later to an N-tier one. Business demands have led to data warehousing, increased use of ERP packages and the deployment of middleware solutions.
M. E. van Biene-Hershey et al. (eds.), Integrity and Internal Control in Information Systems, © Springer Science+Business Media New York 2000
Figure 1: Integrity of redundant data.
The architecture of today's information systems differs from that of a decade ago. The mostly batch-oriented, centralised mainframe processing has gone through major changes. The new component-based architecture still has to prove itself in mission-critical applications, but the first systems of this type are now becoming operational. With cheap data storage we see a return of data redundancy (operational data, data warehouse, general ledger). In complex environments, this is already causing integrity problems (see Figure 1).
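The reconciliation problem behind Figure 1 can be illustrated with a small sketch. All figures and store names below are invented for illustration: the same portfolio total is held redundantly in the operational system, the data warehouse and the general ledger, and any pairwise difference signals an integrity problem.

```python
# Hypothetical sketch: detect divergence between redundant copies of the
# same portfolio total held in the operational system, the data warehouse
# and the general ledger. All figures are illustrative.

operational_balance = 1_250_400.00   # sum of open contracts in the source system
warehouse_balance   = 1_250_400.00   # same figure as loaded into the data warehouse
ledger_balance      = 1_249_900.00   # balance of the corresponding GL account

def check_redundant_copies(copies, tolerance=0.01):
    """Return the pairs of stores whose figures diverge beyond tolerance."""
    names = list(copies)
    return [
        (a, b, round(copies[a] - copies[b], 2))
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if abs(copies[a] - copies[b]) > tolerance
    ]

diffs = check_redundant_copies({
    "operational": operational_balance,
    "warehouse": warehouse_balance,
    "ledger": ledger_balance,
})
for a, b, delta in diffs:
    print(f"{a} and {b} differ by {delta:,.2f}")
```

In this invented example both copies fed from the operational system agree, while the general ledger has drifted, so the check flags the two pairs involving the ledger.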
The scope of business information systems in large and complex organisations has changed. The systems and procedures that should have supported this change in the back office, however, have not kept up. Indeed, most of our thinking on the necessary control measures in front-end applications and back-office (accounting) systems has remained almost static. This has led to a widening gap between the necessary business and information integrity controls on the one hand, and the perceived value of integrity controls by managers and systems developers alike on the other.
We will consider the business requirements that drive the increasing reliance on IT.
In order to understand the changing control measures, we will look at three different
system architectures: mainframe processing, 2-tier client/server architecture and N-tier architecture. For this paper the three architectures can be described as follows:
• Mainframe processing uses terminals or terminal emulation for input. It employs highly concentrated centralised processing, mostly batch oriented.
Hardware and operating system architecture includes IBM S-38 (currently IBM AS-400), all varieties of UNIX, Digital VAX, Tandem, IBM S-390 (such as DOS/VSE and VM/MVS) and many others.
• 2-tier client/server splits processing in two. The majority of the application software runs on the client, which typically sends SQL requests to a database on a server. Usually for performance reasons, the server may have some stored procedures in the database. This architecture is also often called fat client, because much of the application runs on the client. Server hardware and operating system architecture usually include some variety of UNIX and Windows NT, although it is not uncommon to see a mainframe (e.g. IBM S-390 or OpenVMS) as a database server. Client operating systems are generally MS-DOS, Windows 3.11, Windows 95 or Windows 98.
• N-tier (3-tier) splits the processing load between clients running a graphical user interface, an application server running the business application software, and a server running the database or a legacy system. In many cases, the business logic can be partitioned over many different servers. This architecture is also referred to as fat server or thin client architecture. Hardware and operating system architectures are similar to 2-tier ones. In addition, N-tier architecture usually employs middleware to connect the various parts of the system together.
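As a loose illustration of the kind of completeness control a middleware layer can carry between tiers (this is not the API of any particular product; the envelope fields are assumptions), the sending tier can wrap a batch in an envelope with a record count and a content hash that the receiving tier verifies before processing:

```python
# Illustrative sketch (not a specific middleware product): a batch message
# envelope that lets the receiving tier verify completeness without user
# intervention, the kind of control a middleware layer can enforce.

import hashlib
import json

def wrap_batch(batch_id, records):
    """Sending side: build an envelope carrying record count and a content hash."""
    payload = json.dumps(records, sort_keys=True).encode()
    return {
        "batch_id": batch_id,
        "record_count": len(records),
        "checksum": hashlib.sha256(payload).hexdigest(),
        "records": records,
    }

def verify_batch(envelope):
    """Receiving side: recompute count and hash before processing the batch."""
    payload = json.dumps(envelope["records"], sort_keys=True).encode()
    return (
        envelope["record_count"] == len(envelope["records"])
        and envelope["checksum"] == hashlib.sha256(payload).hexdigest()
    )

env = wrap_batch("B001", [{"contract": 1, "amount": 100.0}])
ok = verify_batch(env)
```

A batch that loses or gains records in transit fails verification at the receiving tier, so completeness is enforced by the infrastructure rather than by the users.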
In this paper we will explore the changes that have taken place in business information systems and the effects these changes should have had on business process and information integrity controls. We will look at the way these business systems interface with the back office (accounting) systems and will investigate the effects of the introduction of middleware solutions. We will show how the integration of business systems with accounting systems (most notably the general ledger) can improve both business processes and information integrity controls.
For each type of architecture, we assume the same system blocks: maintaining fixed data, input, processing and output (see Figure 2. Basic process model). When talking about maintaining fixed data, we will mainly discuss the way product specifications become available to the user community. This changes rather dramatically, especially in N-tier environments. Where relevant we will also consider general controls in operational IT environments.
Figure 2. Basic process model.
This paper pursues two different chains of events. First, the development of system architecture from batch-oriented processing, through real-time online processing, to the current wave of component-based design. Second, the development of the platforms on which software is deployed (mainframe, 2-tier client/server, N-tier client/server). Although many of the developments in both chains happened at more or less the same time, the reality has never been as neatly compartmentalised as presented here. Real-time online processing is perfectly possible on a mainframe; indeed, most heavy-duty processing still takes place on these systems. It is only recently that building mission-critical systems on platforms other than a mainframe has become possible.
Information integrity can be defined as the availability of all correct and relevant information at the time and place where it is needed. The scope of this definition is much broader than the usual definition of data integrity. It covers not only reliability but also accessibility, as well as some aspects of effectiveness in the sense of information being delivered on time to the place where it is going to be used. In the course of this paper, the need for this broader definition will become clear.
Business information systems are the systems that hold the transactions with the customer; in a financial institution, for example, the mortgage systems and life insurance systems. The customer is the source and target for, respectively, most input and output.
2.1. Mainframe Architecture

We move back to the end of the seventies and the beginning of the eighties for a closer look at the architecture of business information systems and the influence this architecture had on both programmed and manual control activities. Most data processing was performed within huge mainframe applications that mainly used a batch-oriented architecture. In the banking industry, most branch offices had a network connection with the head office. This was usually intended to retrieve client information (such as account balances) and in some cases to transfer captured data for centralised batch processing (using an FTP-like protocol).
2.1.1. Maintaining fixed data

Product information was maintained in various ways. All the distribution channels possessed product information in writing. The sales force was given the ubiquitous rate books to allow them to manually calculate rates and prices, each salesperson being responsible for the maintenance of his or her own rate book. The same products were also hard-coded into the information systems on the centralised computer at the head office. The programs were rigorously tested to verify the correctness of the product specifications used in their design, before being taken into production. For product developers, the product specifications were a static set of formulae for calculating rates, interest and capital. The process surrounding the product (selling, recording, collecting and paying money) was not an issue during product development.
It must be understood that in this period, product specifications were not thought of as a form of fixed data. In system terms, fixed data were interest and exchange rates, and all other data used for more than just one transaction.
One of the major problems with this architecture concerned the availability of relevant client information. The entire architecture was product based, with each major product (group) having its own system. Automated interrelations between systems were virtually non-existent. The available customer databases were little more than the means to eliminate redundancy in customer address data.
2.1.2. Input

At that time, most data capture took place in the branch office. New transactions (such as loan applications) were either written down and transported physically to head office, or they were entered into a system at the branch office to be communicated to the head office for processing by the systems at a later time.
Figure 3. Mainframe architecture.
Although branch offices used some processing systems, these were mainly for generating proposals. These stand-alone systems generally had no connection to the "real" business systems (at head office). Despite all the commotion made at that time about real-time processing, the majority of mainframe systems were very much batch-oriented, with only limited real-time inquiry facilities.
2.1.3. Processing

As indicated earlier, most processing was batch oriented. This meant that all input had to be processed at night, as part of "end of day" processing. With the rising number of automated information systems, this put a heavy load on the mainframes.
Just as most data entry was batch oriented, most calculations involving active contracts were based on batch processing. Given the overall performance of mainframes, these calculations (such as interest calculations in a loan system, or renewals in an insurance system) were usually performed on a month-to-month basis. The only exceptions to the rule were contracts being finalised or expiring during the month. Generally speaking, the number of contract forms was fairly limited, which meant that only a few different types of calculation had to be performed.
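A month-end batch run of this kind can be sketched as follows (contract data, field names and rates are invented for illustration; real systems of the period were far more involved): interest is accrued for all active contracts in a single pass, while contracts that expired during the month are treated as exceptions.

```python
# Hedged sketch of a month-end batch run: accrue one month of interest for
# every contract still active at the run date. All contract data is invented.

from datetime import date

contracts = [
    {"id": "L1", "principal": 10_000.0, "annual_rate": 0.06, "expires": date(2001, 3, 31)},
    {"id": "L2", "principal": 5_000.0,  "annual_rate": 0.08, "expires": date(1999, 11, 15)},
]

def month_end_run(contracts, run_date):
    """Accrue one month of interest for every contract still active."""
    postings = []
    for c in contracts:
        if c["expires"] < run_date:
            continue  # expired contracts were already settled during the month
        interest = c["principal"] * c["annual_rate"] / 12
        postings.append({"contract": c["id"], "interest": round(interest, 2)})
    return postings

postings = month_end_run(contracts, date(1999, 12, 31))
```

Because every active contract follows the same few calculation types, one such pass per month was enough; only contracts finalised or expiring mid-month needed separate handling.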
2.1.4. Output

In fact, most output from these types of system is generated by the processes discussed previously. Mortgage loan contracts, insurance policies, invoices: all these are generated either during processing or when the data is input. There are, however, some forms of output that merit further discussion:
• Interfaces with other systems. Business processes do not stand alone but are interconnected in numerous ways to other processes. Given the batch-oriented environment, most, if not all, of these interfaces were also on a batch basis. It was not uncommon for these interfaces to be almost entirely manual. Of all the interfaces with other business systems, the most notable was the one with the general ledger.
• Management information. At their inception, the majority of systems were able to provide most of the required operational information. Later, more information of a strategic nature was needed. This usually meant that the functions to generate management information were built on top of the operational database.
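Batch interfaces of the kind described above, notably the one with the general ledger, typically relied on control totals: the sending system wrote a trailer with a record count and a hash total, and the receiving side rejected the file if the trailer disagreed. A minimal sketch, with an assumed file layout and invented figures:

```python
# Minimal sketch of the classic batch control on an interface to the general
# ledger. The transaction layout and figures are assumptions for illustration.

transactions = [
    {"account": "1200", "amount": 250.00},
    {"account": "1200", "amount": -75.50},
    {"account": "4000", "amount": 1_000.00},
]

def make_trailer(txns):
    """Sending system: record count plus hash total over the amounts."""
    return {"count": len(txns), "hash_total": round(sum(t["amount"] for t in txns), 2)}

def accept_interface_file(txns, trailer):
    """GL side: only post the batch when the recomputed totals match the trailer."""
    return make_trailer(txns) == trailer

trailer = make_trailer(transactions)
accepted = accept_interface_file(transactions, trailer)
```

The same pattern works whether the interface is a tape, a transfer file or a manual journal: the trailer travels with the batch, and a mismatch stops the posting.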
A large number of systems were developed in this way: systems that, as we will see, were almost entirely self-managing and contained all the necessary proof of the integrity of the data they held. As developments continued, however, the demands on the systems became greater. Systems had to provide real-time processing and be accessible not only from within the organisation but also by customers. At the same time, the interdependencies between systems became far more numerous. This meant that end-of-day processing became almost unmanageable in its complexity.
The time was ripe for a major paradigm shift in architecture design.
2.2. 2-Tier Client/Server Architecture
Several things were happening simultaneously. On the business side, besides growing third-party access to the network, other developments were going on.
Firstly, a shift from product orientation to customer orientation was occurring. As most business systems were highly product oriented, this meant an enormous transformation of the application architecture. Secondly, users were demanding business systems that not only recorded transactions but supported the sales process as well. In addition, the pace was speeding up. The product life cycle in the eighties could be as long as five years; at the start of the nineties, this fell to about eighteen months.