Chapter 5
Modeling and Analysis

Note: At the time of posting we were unable to insert figures.
These will be posted at a later date.

W5.1 Forecasting

W5.2 Linear Programming: The Blending Problem

W5.3 Regression Analysis Example in SPSS

W5.4 Table W5.1 Representative Software for Decision Analysis

W5.5 Table W5.2 Representative List of VIS Software with Animation

W5.6 Table W5.3 Representative List of Visual Interactive Modeling (VIM) and Visual Interactive Software Packages with Animation

W5.7 Table W5.4 Representative List of Ready-Made Specific DSS (Expanded Table 5.7)

W5.8 Virtual Reality Sources (As Related to VIS/VIM)

W5.9 DSS In Focus W5.1: Airlines Plan for Waiting Lines and Late Passengers

W5.10 DSS In Action W5.2: Simulation at the Indiana University Medical Center Evaluates Medical Resident Scheduling Policies

W5.11 Case Application W5.1 Visual Simulation for a Hospital Recovery Room

W5.12 Additional Exercises W5.1 - 3: W5.1 The Maximum Diversity Problem, W5.2 IFPS Aggregate Scheduling, W5.3 IFPS Financial Functions

W5.13 Major Term DSS Group Project 5.3 (Long Description)

W5.14 Appendix W5-A: A Spreadsheet-based Economic Order Quantity Simulation Model

W5.15 Appendix W5-B The Analytic Hierarchy Process (AHP) Using Expert Choice

W5.16 Appendix W5-C: Visual IFPS/PLUS

W5.17 Appendix W5-D: Visual IFPS PLUS and Spreadsheet Simulation Models

W5.18 Appendix W5-E: Waiting Line Modeling (Queueing) in a Spreadsheet

W5.20 Note

W5.1 Forecasting

Recall that decision making involves choosing an alternative course of action by evaluating the possible consequences of the alternatives. Although the choice is made today, the possible consequences often occur sometime in the future. Therefore, the quality of the decision largely depends on the quality of the forecast (see DeLurgio and Bhame [1991], and Jain [1996]).

Forecasting models are an integral part of many MSS. One can build a forecasting model or use a preprogrammed software package such as Autobox (Automatic Forecasting Systems, Warminster, PA), ForecastMaster (Scientific Systems Inc., Woburn, MA), ForecastPlus (Stat Pac Inc., Edina, MN), FUTURCAST (Futurion Inc., Ridgewood, NJ), or SmartForecast (Smart Software Inc., Belmont, MA). Many MSS development tools (e.g., financial planning languages and spreadsheets) also have built-in forecasting capabilities.

The Uses of Forecasts

The major use of forecasting, as it relates to modeling, is to predict the value of the model variables (often demand), as well as the logical relationships of the model, at some time in the future. The future time of interest depends on when we want to evaluate the results (Broadie and Glasserman [1996]). For example, in an investment decision we may be interested in prices and income a year from today, whereas in a capital investment decision we may be interested in projected prices and income during the next five years. Generally speaking, we distinguish among three types of forecasts: (a) short run (up to one year), where the forecast is used mainly in deterministic (certainty) models; (b) intermediate term (one to three years); and (c) long run (beyond three years), where the forecast is used in both deterministic and probabilistic models.

Forecasting Models and Methods

What happens in the future usually depends on a multiplicity of factors, most of which are uncontrollable. Furthermore, data availability, accuracy, cost, and the time required to make the forecast also play an important role. Because of this, several successful forecasting methods have been developed.

Forecasting methods can be grouped in several ways. One classification scheme distinguishes between formally recognized forecasting techniques and informal approaches such as intuition, spur-of-the-moment guesses, and seat-of-the-pants predictions. Our attention in this section is directed to formal methods, which can be divided into four categories: judgment methods, counting methods, time-series methods, and association or causal methods. Each category is briefly discussed below. For a more detailed discussion, see Turban and Meredith [1994].

Judgment Methods. Judgment methods are those based on subjective estimates and expert opinion rather than on hard data. They are often used for long-range forecasts, especially where external factors (e.g., technological or political developments) may play a significant role. They are also used where historical data are very limited or nonexistent, such as in new product or service introductions. For example, the best scheduling of a new television show on a network requires the expert opinion of the program director (Reddy, Aronson and Stam [1998]). Expert systems may be developed and used. Decision aids such as Expert Choice (see DSS In Action 5.1 and Appendix W5-B) may assist a manager in formulating forecasts.

Counting Methods. Counting methods involve some kind of experimentation or survey of sample data, with an attempt to generalize about the entire market. These methods are used primarily for forecasting demand for products and services, a part of marketing research. This type of forecasting is quantitative and based on hard data, and thus is generally considered more objective than judgment methods. Again, consider the example of television scheduling. Market sampling is performed to determine the markets to which certain shows or show types appeal. If "Monday Night Football" (which has a strong middle-aged male audience) is being aired on one network, a competing network might choose to air a show that appeals to women in the age range of 18 to 45. See Reddy, Aronson and Stam [1998] for details.

Time-series Analysis. A time series is a set of values of some business or economic variable, measured at successive (usually equal) intervals of time. For example, quarterly sales of a firm make up a time series, as does the population in a city (e.g., counted annually), the weekly demand for hospital beds, and so on. We undertake time-series analysis in decision making because we believe that knowledge of past behavior of the time series might help our understanding of (and therefore our ability to predict) the behavior of the series in the future. In some instances, such as the stock market, this assumption may be unjustified. See Figure W5.1 for an Excel spreadsheet example of exponential smoothing.

figw5_1.gif

FIGURE W5.1 Excel Spreadsheet Model of an Exponential Smoothing Forecast.
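The computation behind a spreadsheet model like the one in Figure W5.1 can be sketched in a few lines of Python. This is simple exponential smoothing; the demand series and smoothing constant below are invented for illustration:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each new forecast is a weighted
    average of the latest observation and the previous forecast."""
    forecasts = [series[0]]  # seed the first forecast with the first observation
    for observation in series[1:]:
        forecasts.append(alpha * observation + (1 - alpha) * forecasts[-1])
    return forecasts

# Hypothetical quarterly demand figures
demand = [120, 132, 125, 141, 150, 138]
print(exponential_smoothing(demand, alpha=0.3))
```

A larger alpha weights recent observations more heavily; a smaller alpha smooths out fluctuations, which is the same trade-off one tunes in the spreadsheet version.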

Association or Causal Methods. Association or causal methods include data analysis for finding data associations and, if possible, cause-effect relationships. They are more powerful than the time-series methods, but they are also more complex. Their complexity comes from two sources: First, they include more variables, some of which are external to the situation. Second, they use sophisticated statistical techniques for segregating the various types of variables. Causal approaches are most appropriate for intermediate term forecasting.

Generally speaking, judgment methods, which are subjective in nature, are used in those cases where quantitative methods are inappropriate. Time pressure, lack of data, or lack of money may prevent the use of quantitative models. Complexity of historical data (due to interactions or fluctuations, for example) may also inhibit the use of quantitative methods.

Keep in mind the following:

• Most models require some forecasting, but no one is a perfect forecaster.
• Neural network-based forecasting methods are becoming increasingly important. The latest development in improved forecasting techniques is the use of neural networks (see Chapters 17 and 18); for examples of methods and applications, see Hill et al. [1996].

Forecasting Questions for Discussion

1. Describe judgmental forecasting methods.

2. What is time-series analysis?

3. What are causal forecasting models?

Forecasting Exercise

1. Explain how to perform forecasting in a spreadsheet.

Forecasting Packages

Autocast II
Delphus, Inc., Morristown, NJ

Forecast Pro for Windows
Belmont, MA

SmartForecast
Smart Software, Inc.
Belmont, MA

Solo Statistical Software
BMDP Statistical Software, Inc.
Los Angeles, CA

Forecasting References

Broadie, M. and P. Glasserman. (1996, February). "Estimating Security Price Derivatives Using Simulation." Management Science. Vol. 42. No. 2. 269-285.

DeLurgio, S. and C. Bhame. (1991). Forecasting Systems for Management. Homewood, IL: Business One Irwin.

Hill, T., M. O'Connor and W. Remus. (1996, July). "Neural Network Models for Time Series Forecasts." Management Science. Vol. 42. No. 7. 1082-1092.

Jain, C. L. (1996, Summer). "Monthly Corporate Forecast Meeting." Journal of Business Forecasting Methods & Systems. Vol. 15. No. 2. p. 2, 28.

Lim, J. S. and M. O'Connor. (1996). "Judgmental Forecasting with Interactive Forecasting Support Systems." Decision Support Systems. Vol. 16. 339-357.

Reddy, S., J. E. Aronson and A. Stam. (1998, January). "SPOT: Scheduling Programs Optimally for Television." Management Science. Vol. 44. No. 1. 83-102.

Turban, E. and J. Meredith. (1994). Fundamentals of Management Science, 6th ed. Homewood, IL: Irwin.

W5.2 Linear Programming: The Blending Problem

In Chapter 2, we presented a simple product-mix problem and formulated it as an LP. Here, we introduce another typical LP problem called the blending problem.

LP Example: The Blending Problem (Minimization). In preparing Sungold paint, it is required that the paint have a brilliance rating of at least 300 degrees and a hue level of at least 250 degrees. Brilliance and hue levels are determined by two ingredients, Alpha and Beta. Both Alpha and Beta contribute equally to the brilliance rating; one ounce (dry weight) of either produces one degree of brilliance in one drum of paint. However, the hue is controlled entirely by the amount of Alpha; one ounce of it produces three degrees of hue in one drum of paint. The cost of Alpha is 45 cents per ounce, and the cost of Beta is 12 cents per ounce. Assuming that the objective is to minimize the cost of the resources, the problem is to find the quantities of Alpha and Beta to be included in the preparation of each drum of paint.

Formulation of the Blending Problem. The decision variables are:

x1 = Quantity of Alpha to be included, in ounces, in each drum of paint

x2 = Quantity of Beta to be included, in ounces, in each drum of paint

The objective is to minimize the total cost of the ingredients required for one drum of paint. Since the cost of Alpha is 45 cents per ounce, and since x1 ounces are going to be used in each drum, then the cost per drum is 45 x1. Similarly, for Beta the cost is 12 x2. The total cost is, therefore, 45 x1 + 12 x2, and, as our objective function, it is to be minimized, subject to the constraints (relationships among the variables) of the following specifications:

1. To provide a brilliance rating of at least 300 degrees in each drum. Since each ounce of Alpha or Beta increases the brilliance by one degree, the following relationship exists:

(supplied by Alpha) + (supplied by Beta) ≥ (demand)

1 x1 + 1 x2 ≥ 300

2. To provide a hue level of at least 250 degrees, the effect of Alpha (alone) on hue can similarly be written as:

(supplied by Alpha) + (supplied by Beta) ≥ (demand)

3 x1 + 0 x2 ≥ 250

3. Negative quantities of Alpha and Beta are not allowed (one cannot remove nonexistent chemicals from a drum of paint), so we have nonnegativity constraints that are written as:

x1 ≥ 0

x2 ≥ 0

In summary, the blending problem is formulated as follows: Find x1 and x2 that

Minimize z = 45 x1 + 12 x2

subject to

1 x1 + 1 x2 ≥ 300 (brilliance specification)

3 x1 + 0 x2 ≥ 250 (hue specification)

x1, x2 ≥ 0 (nonnegativity)

Solution. The model is shown in the Excel worksheet in Figure W5.2, and its solution using the Solver tool is shown in Figure W5.3. The optimum found by Excel's Solver and by LINDO is
x1 = 83.333
x2 = 216.667
Total cost = $63.50 (the objective function is in cents: 6,350 cents)

Note: The solution that is good for one drum will be correct for many drums as long as capacity or other constraints are not being violated. Optimization models are frequently included in decision support implementations as is shown in DSS In Action 5.2 and 5.3.
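Because an LP optimum (when one exists and is finite) occurs at a corner of the feasible region, a problem this small can be checked by brute-force enumeration of corner points. The Python sketch below is a verification aid only, not how Solver or LINDO work internally:

```python
from itertools import combinations

# Constraint boundary lines written as a*x1 + b*x2 = c:
# brilliance, hue, and the two nonnegativity constraints.
lines = [(1, 1, 300), (3, 0, 250), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Solve the 2x2 system formed by two boundary lines; None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(x1, x2, tol=1e-9):
    """Check all four constraints of the blending problem."""
    return (x1 + x2 >= 300 - tol and 3 * x1 >= 250 - tol
            and x1 >= -tol and x2 >= -tol)

# Candidate corners: feasible intersections of pairs of boundary lines.
corners = [p for l1, l2 in combinations(lines, 2)
           for p in [intersect(l1, l2)] if p and feasible(*p)]

# Pick the corner with the lowest total cost (in cents).
x1, x2 = min(corners, key=lambda p: 45 * p[0] + 12 * p[1])
print(round(x1, 3), round(x2, 3), round(45 * x1 + 12 * x2, 2))
# prints: 83.333 216.667 6350.0
```

The feasible corners here are (83.333, 216.667) at 6,350 cents and (300, 0) at 13,500 cents, so the enumeration reproduces the Solver/LINDO answer of $63.50.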

Figw5_2.gif

FIGURE W5.2 Excel Spreadsheet: Initial Model of the Linear Programming Blending Problem.

Figw5_3.gif

FIGURE W5.3 Excel Spreadsheet: Solver Solution Run for the Linear Programming Blending Problem.

Figw5_4.gif

The results of a LINDO run solving the blending problem follow.

FIGURE W5.4 Lindo Run: Linear Programming Model of the Blending Problem (TAKE File) and Output (DIVERT File).

General LP Formulation and Terminology

Let us now generalize the formulation. Every LP problem is composed of:

Decision Variables. The variables whose values are unknown and are searched for. Usually they are designated by x1, x2, and so on.

Objective Function. This is a mathematical expression, given as a linear function, that shows the relationship between the decision variables and a single goal (or objective) under consideration. The objective function is a measure of goal attainment. Examples of such goals are total profit, total cost, share of the market, and the like.

If the managerial problem involves multiple goals, one can use the following two-step approach:

1. Select a primary goal whose level is to be maximized or minimized.

2. Transform the other goals into constraints indicating acceptable lower and upper limits, which need only be satisfied.
For example, one may attempt to maximize profit (the primary goal) subject to a growth rate of at least 12 percent per year (a secondary goal). There are many alternative approaches to multiple-objective optimization. One is to weight the goals based on importance into a single objective function. Another seeks to obtain the decision maker’s economic utility functions for each goal. This is beyond the scope of this material.
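The two approaches just described, treating secondary goals as constraints versus weighting all goals into one objective, can be contrasted in a small sketch. The alternatives, profit and growth figures, and weights below are all hypothetical:

```python
# Each alternative is scored on two goals: (profit in $1000s, growth in % per year).
# These figures are invented for illustration.
alternatives = {"A": (120, 10), "B": (100, 15), "C": (140, 8)}

# Two-step approach: maximize the primary goal (profit) subject to a
# secondary-goal constraint (growth of at least 12 percent per year).
acceptable = {k: v for k, v in alternatives.items() if v[1] >= 12}
best_constrained = max(acceptable, key=lambda k: acceptable[k][0])

# Weighted-sum approach: collapse both goals into a single objective.
weights = (0.8, 0.2)  # hypothetical importance weights
best_weighted = max(alternatives,
                    key=lambda k: sum(w * v for w, v in zip(weights, alternatives[k])))

print(best_constrained, best_weighted)
```

Note that the two approaches can pick different alternatives: the constraint rules out high-profit options that miss the growth target, while the weighted sum trades the goals off against each other.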

Optimization. Linear programming attempts to either maximize or minimize the value of the objective function, depending upon the model’s goal.

Coefficients of the Objective Function. The coefficients of the variables in the objective function (e.g., 45 and 12 in the blending problem) are called the profit (or cost) coefficients. They express the rate at which the value of the objective function increases or decreases for each unit of the corresponding decision variable included in the solution.

Constraints. The maximization (or minimization) is performed subject to a set of constraints. Therefore, linear programming can be defined as a constrained optimization problem. The constraints are expressed as linear inequalities (or sometimes equalities). They reflect the fact that resources are limited or that certain requirements must be met, and that the variables are related only through constants multiplied by variables.

Input-Output (Technology) Coefficients. The coefficients of the constraints' variables are called the input-output coefficients. They indicate the rate at which a given resource is depleted or utilized. They appear on the left-hand side of the constraints.

Capacities. The capacities (or availability) of the various resources, usually expressed as some upper or lower limit, are given on the right-hand side of the constraints. The right-hand side also expresses minimum requirements.

Example. These major components of a linear programming model are illustrated for the blending problem:

Find x1 and x2 (decision variables) that minimize the value of the linear objective function z:

z = 45 x1 + 12 x2
(cost coefficients are 45 and 12; decision variables are x1 and x2)

subject to the linear constraints:

1 x1 + 1 x2 ≥ 300
3 x1 + 0 x2 ≥ 250

(input-output coefficients are 1, 1 and 3, 0; capacities or requirements are 300 and 250)

Optimization functions are available in many DSS tools, and optimization packages are available as add-ins for Excel and other DSS tools. It is also relatively easy to interface other optimization software with Excel, database management systems, and similar tools.

W5.3 Regression Analysis Example in SPSS

Figw5_5.gif

FIGURE W5.5 Regression Analysis Performed by SPSS.
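For readers without SPSS, the core of a simple (one-predictor) regression can be computed directly from the ordinary least-squares formulas. The data below are invented for illustration:

```python
def simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x: returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the ratio of the x-y covariation to the x variation.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x  # the fitted line passes through the means
    return a, b

# Hypothetical advertising spend vs. sales observations
intercept, slope = simple_ols([1, 2, 3, 4, 5], [2.1, 4.0, 6.2, 7.9, 10.1])
print(intercept, slope)
```

A statistical package such as SPSS adds the inference around this fit (standard errors, t statistics, R squared), but the fitted coefficients come from exactly this calculation.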

W5.4 TABLE W5.1 Representative Software for Decision Analysis.

Criterium Decision Plus. InfoHarvest Inc., Seattle, WA (http://www.halcyon.com). Decision trees, multicriteria decision making (MCDM).
DecideRight. Avantos Performance Systems Inc., Emeryville, CA (http://www.avantos.com). Multiple choice decision making.
Decision Analysis by TreeAge (DATA). TreeAge Software Inc., Williamstown, MA (http://www.treeage.com). Decision analysis and trees, Markov processes, simulation.
DecisionPro. Vanguard Software Corp., Cary, NC (http://www.vanguardsw.com). Decision tree analysis, simulation, forecasting, optimization.
Demos. Lumina Decision Systems, Inc., Los Altos, CA (http://www.lumina.com). MCDM.
DPL. Applied Decision Analysis (ADA), Menlo Park, CA (dpldept@adainc.com). Decision analysis, tables, trees, influence diagrams.
DS Lab. DS Group, Inc., Greenwich, CT. Visual spreadsheet.
Excel. Microsoft Corp., Bellevue, WA (http://www.microsoft.com). Spreadsheet.
Expert Choice. Expert Choice Inc., Pittsburgh, PA (http://www.expertchoice.com). Multiple choice decision making.
Logical Decisions. Logical Decisions Group, Golden, CO. MCDM; integrates a number of techniques.
PrecisionTree. Palisade Corporation, Newfield, NY (http://www.palisade.com). Decision trees and influence diagrams in a spreadsheet.
@RISK. Palisade Corporation, Newfield, NY (http://www.palisade.com). Risk analysis, sensitivity analysis, simulation.
Solver. Frontline Systems Inc., Incline Village, NV (http://www.frontsys.com). Linear and nonlinear programming in Microsoft Excel and Lotus 1-2-3.
VISA. Visual Thinking International Ltd., Mississauga, ON (http://www.vti.co.uk/vti). Visual MCDM.
Visual IFPS/Plus. Comshare Inc., Ann Arbor, MI (http://www.comshare.com). Financial planning language.
What'sBest!. Lindo Systems Inc., Chicago, IL (http://www.lindo.com). Linear and integer programming.
Which & Why. Arlington Software Corp., Montreal, Quebec (http://www.arlingsoft.com). Multiple choice decision making.

For further details, see Buede [1996], and Greenfield [1996].

Buede, D. (1996, August). "Decision Analysis Software Survey: Aiding Insight III." OR/MS Today. 73-79.

Greenfield, L. (1996, November 17). "Decision Analysis Tools." The Data Warehousing Information Center Web page: pwp.starnetinc.com/larryg/decision.html.

W5.5 TABLE W5.2 Representative List of Visual Interactive Simulation (VIS) Software with Animation

AIM, AweSim!, Factor, SLAM. Pritsker Corp., Indianapolis, IN (http://www.pritsker.com).
ADAS. Cadre Tech., Inc., Providence, RI.
Cinema Animation Systems, ARENA, SIMAN. Systems Modeling Corp., Sewickley, PA (http://www.sm.com).
COMNET III, MODSIM III, SimScript, Simprocess. CACI Product Co., Arlington, VA (http://www.caci.com).
EXTEND. Imagine That, Inc., San Jose, CA (http://www.imaginethatinc.com).
GPSS/H. Wolverine Software Corp., Annandale, VA.
GPSS/PC. Minuteman Software, Holly Springs, NC (http://www.mindspring.com/~minutemn/home.htm).
Micro Saint / ActionView. Micro Analysis & Design Inc., Boulder, CO (http://www.madboulder.com).
ModelWork, SIMKIT. Intellicorp, Mountain View, WA (http://www.intellicorp.com).
ProModel. ProModel Corp., Orem, UT (http://www.promodel.com).
VisSim, VISUAL. Visual Solutions, Westford, MA (http://www.vissim.com).
WITNESS. The Lanner Group (AT&T), Solon, OH.

W5.6 TABLE W5.3 Representative List of Visual Interactive Modeling (VIM) and Visual Interactive Simulation (VIS) Packages with Animation

CyberVision, ExoDIS, DISplay. Computer Explorations, Inc., Huntsville, AL (http://www.exodis.com). Visual development simulation modeling, visual models, visual simulation, visual viewer for distributed interactive simulation environments, virtual reality system.
Distributed Interactive Simulation, Dynamic Animation Systems, Immersive Environments. Dynamic Animation Systems Inc., Fairfax, VA (http://www.d-a-s.com). Visual 3-D environments and simulations.
EVS-NT PRO, EVS-SIM. C Tech Development Corp., Newport Beach, CA (http://www.ctech.com). Environmental Visualization Systems: geostatistical analyses and visualization of environmental, mining, archeological, geology and other 3-D data; 3-D visual modeling and simulation.
Extreme Simulators. Synthetic Environment Engineering, Inc., Kittery, ME (http://www.argo.net/~ajduros/sec/faf1.htm). Visual interactive environments.
MultiGen II Pro, GameGen II, SmartScene, ModelGen. MultiGen Inc., San Jose, CA (http://www.multigen.com). Visual interactive simulation, visual interactive modeling, environment creation, video game environments.
ORCA Visual Simulation Environment. Orca Computer, Inc., Blacksburg, VA (http://www.OrcaComputer.com). Visual interactive modeling, visual interactive simulation.
RealiView, RealiMation, Space-Time Editor. DataPath Limited, Derby, England (http://www.realimation.com). Visual models, computer animation.
VASM. Virtual Animation System Management, Albuquerque, NM (http://www.fmsm.com). 3-D modeling, computer rendering, computer animation.

W5.7 TABLE W5.4 Representative List of Ready-Made Specific DSS (Expanded Table 5.7)

(A partial list appears in Table 5.7 on page 181 of the textbook.)

AutoMod, AutoSched. AutoSimulations, Bountiful, UT (http://www.autosim.com). 3-D true-scale walk-through animations for manufacturing and material handling; short-interval manufacturing scheduling and capacity analysis.
Budgeting & Reporting. Helmsman Group, Inc., Plainsboro, NJ (http://www.helmsmangroup.com). Financial data warehousing.
FACTOR/AIM, PACKAGING. Pritsker Corp., Indianapolis, IN (http://www.pritsker.com). Manufacturing simulator with costing capabilities; high-speed/high-volume food and beverage industry simulator.
Factory Software. CIMTECHNOLOGIES CORP., Ames, IA (http://www.cimtech.com). Facilities layout simulator.
MedModel, ServiceModel. ProModel Corp., Orem, UT (http://www.promodel.com). Healthcare simulation software; service industry simulation software.
OIS. Olsen & Associates Ltd., Zürich, Switzerland (http://www.olsen.ch). Directional forecasts, trading models, risk management.
OptiPlan Professional, OptiCaps, OptiCalc. Advanced Planning Systems, Inc., Alpharetta, GA. Supply chain planning.
Taylor II. F & H Simulations, Orem, UT (http://www.taylorii.com). Manufacturing and material handling simulation environment.
PLANNING WORKBENCH. Proasis Ltd., Chislehurst, Kent, England (http://www.proasis.co.uk). Graphically based planning system for the process industry.
PROVISA. The Lanner Group (AT&T), Solon, OH. Finite capacity scheduling for manufacturing.
Simulation Expert. Visual Logic (http://www.visual-logic.com). Enterprise modeling.
Solver, Scheduler, Planner. ILOG, Mountain View, CA. Constraint-based reasoning engine for budget resource allocation.
StatPac Gold. Stat Pac Inc., Edina, MN (http://www.statpac.com). Survey analysis package.
TRAPEZE. Trapeze Software Group, Mississauga, ON (http://www.trapsoft.com). Planning, scheduling and operations for fixed route, demand-responsive and flexible route operations.
TruckStops, OptiSite, BUSTOPS. MicroAnalytics, Inc., Arlington, VA. Distribution management and transportation.

W5.8 Virtual Reality Sources (As Related to VIS/VIM)

Lightwave 3D, BendingLight. VResources (http://www.imaginative.com). Animation, 3-D design.
Extreme Simulators. Synthetic Environment Engineering, Inc., Kittery, ME (http://www.argo.net/~ajduros/sec/faf1.htm). Visual interactive environments.
Consulting, VR. Space/Management Programs Inc. (http://www.space-chi.com). Virtual reality and visual simulation design firm.
RealiView, RealiMation, Space-Time Editor. DataPath Limited, Derby, England (http://datapath.co.uk). Computer animation, visual models.
ZFX Animation. Kingsport, TN. Animation, visual simulation.

W5.9 DSS In Focus W5.1: Airlines Plan for Waiting Lines and Late Passengers

At U.S. airline ticket counters, customers arrive somewhat randomly, although an even distribution of takeoff and landing times can sometimes smooth arrivals (and the airline has some control over this). A common heuristic rule at the ticket counter is to take passengers on a first-come, first-served basis. However, if a passenger's flight is scheduled to leave within 30 minutes, he or she receives a higher priority based on the time until the flight leaves (less time means a higher priority). Thus the airline fills its airplanes, and passengers who tend to run late can often (but not always) make their flights. (Note: there is never a guarantee of being seated if you arrive less than 30 minutes before takeoff, so don't try this for real!)
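The heuristic described here is a priority queue: passengers are served first come, first served, except that anyone departing within 30 minutes jumps ahead, ordered by time remaining. A sketch using Python's heapq module (the passenger names, arrival order, and departure times are hypothetical):

```python
import heapq

CUTOFF = 30  # minutes to departure that triggers priority handling

def priority_key(arrival_order, minutes_to_departure):
    """Urgent passengers sort before everyone else; among the urgent,
    less time to departure means higher priority; the rest stay FIFO."""
    if minutes_to_departure <= CUTOFF:
        return (0, minutes_to_departure)
    return (1, arrival_order)

# (arrival order, name, minutes until the flight departs) -- invented data
passengers = [(1, "Avery", 120), (2, "Blake", 25), (3, "Casey", 90), (4, "Drew", 10)]

queue = [(priority_key(order, mins), name) for order, name, mins in passengers]
heapq.heapify(queue)
service_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(service_order)
# prints: ['Drew', 'Blake', 'Avery', 'Casey']
```

Drew and Blake are both inside the 30-minute window, so they are served by urgency; Avery and Casey keep their original arrival order.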

W5.10 DSS In Action W5.2: Simulation at the Indiana University Medical Center Evaluates Medical Resident Scheduling Policies

Society has demanded reform in medical resident work scheduling; consequently, hospitals are implementing changes having organizational, clinical, financial, social, emotional, and educational consequences for physician training and patient care. At Indiana University Medical Center, simulation modeling was used to evaluate the outcomes of alternative designs prior to implementation. Mobile resources such as physicians with complex job descriptions and patients with time-varying arrival processes complicated the modeling task. Further, special rules concerning the physicians' legal rest requirements and limits on being on-call complicate the situation. A flexible, powerful simulation language helped to model resource decision rules and the frequent preemptions of less urgent activities as more urgent requests arise. A distribution fitting package enhanced the synthesis of data from diverse sources into distributions that adequately modeled input processes. The resulting simulation model was used to examine alternatives in the design of a new housestaff work schedule. This model provided valuable guidance to hospital administrators and residency program directors in planning their schedules and resources efficiently and effectively.

(Source: Condensed from Dittus, R. S., R. W. Klein, D. J. DeBrota, M. A. Dame and J. F. Fitzgerald. (1996, June). "Medical Resident Work Schedules: Design and Evaluation by Simulation Modeling." Management Science. Vol. 42. No. 6. 891-906.)

W5.11 CASE APPLICATION W5.1: Visual Simulation for a Hospital Recovery Room

An often crowded recovery room led to a study of the main operating theatre suite in a large British general hospital, which consists of six operating theatres together with a common post-operative recovery unit. Management was concerned about the level of throughput of the system. Senior medical staff felt that insufficient recovery spaces had been provided when the suite of theatres was built; if correct, this was a physical constraint requiring considerable financial investment to remedy. They also thought that shortages of porters (who take patients from the wards to theatre and back) and of recovery nurses were current operational problems.

The study was undertaken to identify delays and restrictions in the system (due to resource imbalances) by using a visual interactive simulation (VIS) model to explore the relationship between various throughput levels and resource allocations. In particular, the VIS model was used to investigate extreme loadings of the system because of the very high capital cost that would be required if additional recovery spaces were to be provided. In addition, and indeed before being able to consider the "hard" aspects of such a system, the researchers had to understand the relationships among the various groups of health-service staff who constituted the "soft" part of the system.

While there was no evidence of delays due to the shortage of recovery spaces there were occasions when all recovery spaces were in use, i.e. when the system was at its limit. Any increase in throughput was therefore likely to lead to delays due to the non-availability of recovery spaces. To investigate the effects of throughput increases, a VIS model of the system was constructed.

A major benefit of using visual interactive simulation in hospitals is that it enables the medical, administrative and nursing staff to understand suggested changes to the system being modeled more easily. This proved to be the case in this study also.

Visual interactive simulation has two prime advantages. First, it enables those who work in the system being modeled to validate the model, that is, to become convinced that the model is a reasonable and accurate representation of reality. Second, it provides a convenient means for the decision makers to consider different ways of running the system. Simulation cannot answer the question "What is the best feasible solution?" A process of trial and error or intelligent guesswork must be used to gain an understanding of the interrelationships, after which some fine tuning can yield better solutions. Simulation allows experimentation in a safe environment, testing new scenarios before deciding which changes, if any, should be implemented. The model turned out to involve a complicated scheduling problem encompassing theatres, surgical teams, anesthetists, recovery nurses and porters.

An advantage of VIS is that those with an intimate knowledge of the actual system can provide some measure of validation by checking how well what happens on screen captures reality. The model was judged a reasonable representation of reality, since there were no major differences between the observations and the outputs of the model.

Several scenarios were run to check the effect of varying the number of porters and of scheduling more sessions (operations) per week. In one scenario, increasing the number of scheduled sessions by 28 per cent, from 47 to 60 per week, resulted in a 22 per cent increase in patients. This extra workload could be handled with an 11 per cent increase in nursing recovery hours at a similar level of service, as measured by the average wait of patients due to the unavailability of nurses. There was very little evidence of delays due to unavailability of recovery spaces, but there was a proportional increase in the number of occasions when all spaces were in use.

Given that the available recovery spaces were generally sufficient to service the six theatres when all were scheduled, and given the very large capital cost represented by a modern six-suite theatre block, it would seem worth investigating further whether hospital management should recruit more of the necessary support staff to make full use of it.

A worst case scenario in the recovery unit was modeled and tested. To eliminate the porters as a constraint, six were allocated in the model. Various combinations of the numbers of spaces and recovery nursing hours were tried and a typical equilibrium result required 14 spaces and 485 recovery nursing hours.

In theory, very large increases in throughput can be accommodated with quite modest increases in the number of recovery spaces. Recovery spaces, including building work and equipment, are very expensive to provide, and this very unlikely worst-case scenario demonstrates that the maximum number of any new spaces should be limited to five, seven fewer than originally believed necessary.

The major benefit of this study was that it resulted in a much greater understanding of the interconnected complexity of the overall system on the part of the various groups of staff involved. There is no simple answer to the original problem of increasing throughput.

The model yields useful information for such decisions in terms of the likely impact on the flow of patients of one less or one more porter and of similar changes in recovery nursing staff. The worst case scenario also helps management to put an upper limit on any proposed increase in recovery spaces. Essentially, the VIS model was used to focus on the issues and discover that no increase in facility size was necessary -- just better scheduling.

(Condensed from W. E. McAleer, J. A. Turner, D. Lismore and I. A. Naqvi, "Simulation of a Hospital’s Theatre Suite," Journal of Management in Medicine, Vol. 9, No. 5, 1995, 14-26.)

Case W5.1 Questions

1. Why is this scheduling problem so complex?

2. What are the real issues in this case? At the start of the study, did the stakeholders really understand what options were feasible? Why or why not?

3. How did the visual interactive simulation model help the hospital’s management focus on the issues?

4. How did the visual interactive simulation model get the various stakeholders to buy into its results?

5. How could the VIS developers modify their model to describe emergency room operations or the cafeteria operations?

W5.12 Additional Exercises W5.1 - 3

W5.1. The Maximum Diversity Problem and a Heuristic. An optimization model called the Maximum Diversity Problem describes the situation of selecting a group of m items (people, etc.) from a total set of n while trying to maximize the interaction or differences among the items selected. As a fictional illustrative example, Gwen, the mayor of Athens, GA, wants to form a diverse committee to investigate an important matter and provide advice. She has identified 7 potential members and wants to choose 3 of the 7 (usually problems like this are much larger, involving the selection of on the order of 10 people from a group of 100 or more). Her expert staff has looked into a number of factors that can contribute to diversity: income level, gender, race, political party membership, age, job function, location of home, renting versus owning a home, education area, education level, and so on (these are not meant to be discriminatory -- items like these contribute to diversity). From these individual items for each person, her staff has scaled and combined the results in a weighted fashion to develop a measure of diversity (or dissimilarity), dij = the 'interaction' between person i and person j. All the measures are compiled into the diversity matrix below:

Diversity Matrix
 i\j    1    2    3    4    5    6    7
  1     -    4    3    1    2    6    4
  2     -    -    4    6    9    2    3
  3     -    -    -    5    8    3    1
  4     -    -    -    -    4    3    5
  5     -    -    -    -    -    9    8
  6     -    -    -    -    -    -    5
  7     -    -    -    -    -    -    -
So if, say, we choose people 1, 3, and 6, then 1 interacts with 3 at a 'value' of 3, 1 interacts with 6 at a value of 6, and 3 interacts with 6 at a value of 3, for a total interaction of 3 + 6 + 3 = 12. Although Mayor Gwen's talented staff can develop the measures, they cannot easily solve large instances of the problem directly, because it is a combinatorial problem (the number of possible solutions to explore grows explosively with n).

a. Fortunately, heuristics can be applied to obtain solutions relatively close to optimal. Identify a high-quality solution by selecting 3 of the 7 potential candidates. How did you make the selection? Describe the 'heuristic' you developed and used. What heuristic methods did not work well?

b. Write a computer program to evaluate all possible solutions (storing the best solution found at every iteration). Let it run for about 15 minutes. (Estimate the run length by timing the evaluation of 100 solutions and then determining how many you can evaluate in 15 minutes.) Did it evaluate all the possibilities? How close is its best solution to yours? Try a larger problem, say choosing 5 out of 20 people (use random data).

c. Write a computer program to execute your heuristic. Compare its results to those from the program developed in part b. Can you improve upon your heuristic?
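For parts a through c, a short program in a general-purpose language such as Python can serve as a starting point. The sketch below uses the diversity matrix above; for n = 7 and m = 3 there are only C(7,3) = 35 subsets, so exhaustive search is instant, and the greedy rule shown is just one possible heuristic (yours may differ).

```python
# Exhaustive search (part b) and one simple greedy heuristic (part a)
# for the Maximum Diversity Problem, using the matrix above.
from itertools import combinations

# Upper triangle of the diversity matrix (d[(i, j)] for i < j), people 1..7.
d = {(1, 2): 4, (1, 3): 3, (1, 4): 1, (1, 5): 2, (1, 6): 6, (1, 7): 4,
     (2, 3): 4, (2, 4): 6, (2, 5): 9, (2, 6): 2, (2, 7): 3,
     (3, 4): 5, (3, 5): 8, (3, 6): 3, (3, 7): 1,
     (4, 5): 4, (4, 6): 3, (4, 7): 5,
     (5, 6): 9, (5, 7): 8,
     (6, 7): 5}

def diversity(subset):
    """Total pairwise interaction of the chosen people."""
    return sum(d[pair] for pair in combinations(sorted(subset), 2))

def brute_force(n=7, m=3):
    """Evaluate every m-subset of {1..n} and keep the best (part b)."""
    return max(combinations(range(1, n + 1), m), key=diversity)

def greedy(n=7, m=3):
    """One possible heuristic: start with the highest-valued pair, then
    repeatedly add the person who gains the most diversity."""
    chosen = set(max(d, key=d.get))
    while len(chosen) < m:
        candidates = set(range(1, n + 1)) - chosen
        chosen.add(max(candidates,
                       key=lambda p: diversity(chosen | {p}) - diversity(chosen)))
    return tuple(sorted(chosen))

best = brute_force()
print(best, diversity(best))            # the optimal committee and its score
print(greedy(), diversity(greedy()))    # the heuristic committee and its score
```

On this small instance the greedy rule comes close to, but does not match, the optimum, which illustrates the point of part c: a heuristic trades a little solution quality for a great deal of computing time on large instances.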

W5.2. Aggregate scheduling is a methodology commonly used in job shops. It is used to determine the total resources that must be available for a given time period. Suppose we want to evaluate several workforce policies and how they affect inventory and the ability to meet demand at the Harris Machining Company. We have forecast demand for the next 6 months of 420, 360, 390, 350, 420, and 340 for the Harris wire testing machines (WTMs). Each employee can produce 10 WTMs per month. The initial inventory is assumed to be zero. The basic model (adapted from P. Gray, Visual IFPS for Business, Prentice Hall, Upper Saddle River, NJ, 1996) is as follows:

COLUMNS 1..6
DEMAND = 420, 360, 390, 350, 420, 340
PRODUCTION = DEMAND
WORKFORCE = PRODUCTION / 10
CHANGE IN WORKFORCE = 0, WORKFORCE - PREVIOUS WORKFORCE
CURRENT INVENTORY = PRODUCTION - DEMAND
TOTAL INVENTORY = CURRENT INVENTORY,
PREVIOUS TOTAL INVENTORY + CURRENT INVENTORY

(Note that this model may be implemented quite readily in Excel.)
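For readers without IFPS, the column logic of the TRADEOFF model can be checked with a short Python translation. This is a sketch only: the `tradeoff` function and its names are ours, and the `production` argument stands in for the what-if cases of part a.

```python
# A minimal Python translation of the TRADEOFF model above.
# Negative total inventory represents backordered demand.
def tradeoff(demand, production=None):
    if production is None:
        production = list(demand)             # Base Case: PRODUCTION = DEMAND
    workforce = [p / 10 for p in production]  # WORKFORCE = PRODUCTION / 10
    change = [0.0] + [workforce[t] - workforce[t - 1]
                      for t in range(1, len(demand))]
    current = [p - d for p, d in zip(production, demand)]
    total = []                                # running (cumulative) inventory
    for c in current:
        total.append(c if not total else total[-1] + c)
    return {"workforce": workforce, "change": change,
            "current inventory": current, "total inventory": total}

demand = [420, 360, 390, 350, 420, 340]
base = tradeoff(demand)                       # zero inventory by construction
case1 = tradeoff(demand, [380] * 6)           # What if Case 1: PRODUCTION = 380
print(case1["total inventory"])
```

Running the constant-production case shows the trade-off directly: the workforce never changes, but inventory swings between backorders and stock on hand as demand fluctuates.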

a. Implement model TRADEOFF in IFPS and evaluate the following three policies:

1. The Base Case in the model, in which WORKFORCE = PRODUCTION / 10
2. What if Case 1: PRODUCTION = 380
3. What if Case 2: PRODUCTION = 390

Describe the trade-offs among overtime, workforce and inventory/backordering that you discovered. Plot the results.

b. Suppose the unit cost of hiring an employee is $50, firing is $70, unit inventory costs $5.00 per month, and backordering costs $7.00 per month. Producing a single WTM costs $25. Add these costs to the model and redo the scenarios in part a. Which of the three cases is the lowest-cost alternative?

c. Suppose demand is now considered under risk and follows a uniform distribution ranging from 300 to 400. Extend the models and try the three policies outlined in part a. Include the costs in the model as in part b.

d. Implement the TRADEOFF model in a spreadsheet. Was it easier or harder to do?

e. Model this problem as a (dynamic) linear programming problem and solve it.

W5.3. Consider the following IFPS model (Adapted from P. Gray, Visual IFPS for Business, Prentice Hall, Upper Saddle River, NJ, 1996):

MODEL FUNCT

COLUMNS 1..4
INVESTMENT = 1000, 0
RETURNS = 400, 500 FOR 2, 600
DISCOUNT RATE = 15%
NET PRESENT VALUE = NPVC(RETURNS, DISCOUNT RATE, INVESTMENT)
NET TERMINAL VALUE = NTV(RETURNS, DISCOUNT RATE, INVESTMENT)
INTERNAL RATE OF RETURN = IRR(RETURNS, INVESTMENT)
GROWTH OF RETURNS = GROWTHRATE(RETURNS)
RECENT GROWTH = GROWTHRATE(RETURNS,2)
\
\ BENEFIT / COST RATIO
\
BENEFITS = NPVC(RETURNS, DISCOUNT RATE, 0)
COSTS = -NPVC(0, DISCOUNT RATE, INVESTMENT)
BCRATIO = BENEFITS / COSTS

(Note that this model may be implemented quite readily in Excel.)
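The key calculations can also be checked in plain Python. Note this is a rough sketch: IFPS's NPVC, NTV, and IRR functions have their own column-by-column conventions, and the sketch below simply assumes each column t = 1..4 is discounted by (1 + r)^t, with the $1,000 investment in column 1 only.

```python
# A rough check of the FUNCT model's financial functions.
investment = [1000, 0, 0, 0]
returns    = [400, 500, 500, 600]     # RETURNS = 400, 500 FOR 2, 600
rate       = 0.15                     # DISCOUNT RATE = 15%

def npv(flows, r):
    """Present value of period-1..n cash flows at rate r."""
    return sum(f / (1 + r) ** t for t, f in enumerate(flows, start=1))

cash = [r_ - i for r_, i in zip(returns, investment)]
print(round(npv(cash, rate), 2))      # net present value at 15%

def irr(flows, lo=0.0, hi=10.0, tol=1e-9):
    """Bisection on npv(flows, r) = 0; assumes a single sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(flows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

print(round(irr(cash), 4))            # internal rate of return
```

Under these assumptions the net present value comes out positive and the internal rate of return well above the 15 percent discount rate, which is why the benefit/cost ratio in the model exceeds one.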

a. Identify the following:

1. The decision variables
2. The result variables
3. The uncontrollable variables
4. The mathematical relationships.

b. Implement and solve this problem.

c. What discount rate yields a NET PRESENT VALUE of 0 in year 4? Use what if analysis and goal seeking to identify this value. Which is easier?

d. Prepare an influence diagram to describe this model.

e. Modify this problem to be under assumed risk. Suppose the returns are actually distributed according to a triangular distribution with the median (peak) equal to the values already in the model, the low value 10 percent less than the median, and the high point 20 percent greater than the median. Implement and solve this new problem in IFPS. (Hint: the triangular distribution is generated with TRIRANDR(low, mid, high), where low is the lowest possible value, high is the highest possible value, and mid is the median, the peak.)

f. Implement this problem (parts b and e) in a spreadsheet and compare the level of difficulty in doing both.

W5.13 Major Group Term DSS Project 3 (Long Description)

Part 1: Learn about the Model

The transportation problem is a classical specialization of linear programming that has had a major impact on commodity distribution since it was first solved in 1941. The model essentially indicates what to ship where at minimum cost. There are supplies, demands, and known unit costs of shipping. For example, the Swissvale Soup Company may be shipping crates of soup from two factories, or supplies (Pittsburgh and San Francisco, cities numbered 1 and 2), to two regional warehouses, or demands (Dallas and Boston, cities numbered 3 and 4). The raw data describing this problem on a weekly basis can be stated in the following table:

                            Unit Shipping Costs To
                          Demand 3      Demand 4
From                      Dallas        Boston       Supplies (Crates)
Supply 1  Pittsburgh        $15           $12              130
Supply 2  San Francisco     $18           $23              210
Demands (Crates)            200           140
Pittsburgh has 130 crates of Soup available per week, while San Francisco has 210. Dallas requires 200 crates, while Boston requires 140. The per unit shipping costs are \$15 from Pittsburgh to Dallas, etc.

The problem may be represented on a graph and is one of the best-known network programming problems:

figwtp.gif

The linear programming model of this problem may be defined as:

Let xij = the amount shipped from supply i (i = 1,2) to demand j (j = 3, 4); cij = the unit shipping cost from supply i to demand j (see the table); si = available supply of crates of soup at supply i, and dj = soup crates required at demand j.

There is a variable for every potential pair of cities for which ‘flow’ may occur. There is a constraint for every supply and every demand point (node). The supply constraints ensure that each supply provides its available resource to the system, while the demand constraints ensure that each demand point draws its needed requirements from the system. The objective is to minimize the total cost of shipping.

Minimize Total Cost = 15 x13 + 12 x14 + 18 x23 + 23 x24

Subject to

Supply 1: x13 + x14 = 130

Supply 2: x23 + x24 = 210

Demand 3: x13 + x23 = 200

Demand 4: x14 + x24 = 140

and the nonnegativity constraints are:

xij ≥ 0, for i = 1, 2; j = 3, 4.
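Because total supply (340) equals total demand (340), this 2x2 model has a single degree of freedom: fixing x13 = t determines the other three flows through the balance constraints. A minimal sketch (a check on the formulation, not a general LP solver) enumerates integer values of t:

```python
# Enumerate the one free variable of the balanced 2x2 transportation model.
def cost(t):
    x13, x14 = t, 130 - t          # Supply 1: x13 + x14 = 130
    x23, x24 = 200 - t, 10 + t     # Demand 3: x13 + x23 = 200; Supply 2 balance
    if min(x13, x14, x23, x24) < 0:
        return None                # infeasible: violates nonnegativity
    return 15 * x13 + 12 * x14 + 18 * x23 + 23 * x24

feasible = [t for t in range(0, 131) if cost(t) is not None]
best_t = min(feasible, key=cost)
print(best_t, cost(best_t))        # optimal x13 and minimum total cost
```

Writing the cost out as a function of t shows it is 5,390 + 8t, so the optimum ships nothing from Pittsburgh to Dallas: x14 = 130, x23 = 200, x24 = 10, at a total weekly cost of $5,390.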

a. Model and solve this problem in Excel. First set this up and use trial-and-error to try to establish a solution. Then, use the Solver Tool. Also solve this in a dedicated Linear Programming (such as Lindo or QSB 2+) or Network Programming (or Transportation Problem Solver) (such as QSB 2+ or NETFLO) Package.

b. Note that the total supply must equal the total demand. If there is too much supply, we add a dummy demand city (with a demand equal to the excess supply). The excess supply would simply not be produced and shipped; in effect, the model decides the most efficient way to cut back the factories' production. Similarly, if there is too much demand, we add a dummy supply with the capacity to provide the missing soup. Conceptually, this means that some customers will go without their soup, namely those "supplied" from the fictional warehouse. (Alternatively, one can identify real alternative sources that could supply the product and use the actual cost of shipping plus whatever additional costs are incurred - the penalty - for using them.)

Suppose both Pittsburgh and San Francisco each have 200 crates of soup available per week for shipping. Model and solve this problem (don’t forget the dummy demand). Then, examine the effect of lowering the available supplies of Pittsburgh and San Francisco by equal amounts (to 170 each) - solve this problem, and compare the results to the solution to the original problem. What happened?

c. Since this is only one week of Swissvale Soup's shipping operations, one should consider developing a dynamic model. One way to handle this is to develop a larger transportation problem framework. For example, suppose we allow inventory and backordering. Inventory costs $2 per unit per week in Pittsburgh and $2.30 per unit per week in San Francisco, while backordering costs $3.50 and $4.10 per unit per week at the two supplies, respectively. This allows Swissvale Soup to smooth production and, technically, meet demand over time. Suppose the soup demands over the next 4 weeks are:
Demand        Week 1   Week 2   Week 3   Week 4
3  Dallas       150      160      250      260
4  Boston       140      140      100      160

Supplies remain fixed at 130 and 210 per week for Pittsburgh and San Francisco, respectively.

One extension of the model is relatively simple (see Aronson [1989] and Bowman [1956]). Repeat each supply in every time period, for a total of 8, and repeat each demand as well. If the total supply does not equal the total demand, add a dummy supply or demand as needed. Then, a variable that represents shipping from a supply to a demand in a later week invokes one week of inventory for every week forward you go. For example, let i = 1, 2, 3, 4, ..., 8 represent the supplies and j = 9, 10, 11, 12, ..., 16 the demands. Supply 3 is actually supply 1 (Pittsburgh) in week 2, and demand 14 is actually demand 4 (Boston) in week 3. So one week of inventory must be charged (in Pittsburgh), and c3,14 = 12 + 2 = 14. When shipping from "now" to meet demand in the "past," backordering charges are likewise invoked.

Model this problem and solve it as you did in part a.
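The cost construction for part c can be sketched in a few lines. The function below is ours, not part of the chapter's model files; it maps a (supply city, supply week, demand city, demand week) combination to a unit cost using the stated inventory and backorder charges.

```python
# Sketch of the dynamic transportation cost construction in part c.
base_cost = {(1, 3): 15, (1, 4): 12, (2, 3): 18, (2, 4): 23}
inventory = {1: 2.00, 2: 2.30}     # $/unit/week held at each supply city
backorder = {1: 3.50, 2: 4.10}     # $/unit/week of lateness at each supply city

def c(supply_city, supply_week, demand_city, demand_week):
    """Unit cost of serving (demand_city, demand_week) from
    (supply_city, supply_week): base shipping cost plus inventory for each
    week shipped ahead of need, or backordering for each week late."""
    delay = demand_week - supply_week
    if delay >= 0:                 # produce now, hold until needed
        carry = inventory[supply_city] * delay
    else:                          # meet past demand late: backorder
        carry = backorder[supply_city] * (-delay)
    return base_cost[(supply_city, demand_city)] + carry

# Pittsburgh in week 1 serving Dallas in week 3: two weeks of inventory.
print(c(1, 1, 3, 3))               # 15 + 2 + 2 = 19
```

Looping this function over all 8 replicated supplies and 8 replicated demands produces the full 8x8 cost matrix for the expanded transportation problem.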

d. Advanced part - check some references and develop a transshipment model version of the part c situation. Model and solve it. (Hint: see Aronson [1989].)

Part 2: Develop a Working Prototype of a Complete DSS

One of the major problems in DSS development is that the components do not necessarily glue together readily. Another is that many optimization packages require specific variable names (x1, x2, etc.), while decision makers need meaningful descriptions ("Crates of soup shipped from Pittsburgh to Boston," etc.). Though we have not yet covered the user interface component of DSS in detail (see Chapter 7), it is time to learn about developing a DSS, based on two real-world DSS projects similar to one in which one author was involved. This project involves building a model-based DSS using generator tools to reduce the development effort. The really difficult part is providing the cement among the three components: the DBMS, the MBMS, and the user interface. The problem to be implemented is the transportation problem of Part 1 (if desired, the transshipment problem mentioned in Part 1d or a pure linear programming model may be substituted).

Develop a PC-based DSS in a PC database package, say Microsoft Access, to manage the input data to a network optimization system, independent of the database. Basically, the DBMS should handle all the data definitions, data storage, data manipulation, and data validation and verification (essentially, the model structure and managerially meaningful names for the locations and shipping links). The DBMS should also handle the user interface. Then, determine how to interface the data from the DBMS with an optimization package (such as QSB 2+, Lindo, or Excel Solver). Once the data are verified, the DBMS produces an input file for the optimizer (so you must determine the input file format required), runs the optimizer (in DOS, you can pipe commands directly into software by putting them in a file and using the < symbol, as in QSB <optcmd.cmd; the DBMS may need to create this file as well), and retrieves data from the optimizer's output file into another database. Windows-based systems are more seamlessly integrated. From this database of results, the DBMS must match the optimizer's variables with managerially meaningful terms and produce a usable screen and report. You may also want to define reports with a special report database.

DSS Operational Summary

• The DBMS provides a user interface.
• The DBMS stores the shipping data and allows the user to edit it, and verify its integrity.
• The DBMS produces a model input file and a command file that the optimizer can read.
• The DBMS invokes the optimizer that automatically solves the problem.
• The DBMS reads the output file from the optimizer into a result database.
• The DBMS synthesizes usable reports from the result database. These may be queried online or printed.

The user interface should be relatively easy to use and should have an option of running on automatic. Also, the system should be capable of using different sets of data and should provide defaults, which can be stored in a default data table.

References for Major Group Term DSS Project 3

Aronson, J. E. (1989). "A Survey of Dynamic Network Flows." Annals of Operations Research, Vol. 20. pp. 1-66.

Bowman, E. H. (1956). "Production Scheduling by the Transportation Method of Linear Programming." Operations Research. Vol. 4. No. 2. pp. 100-103.

W5.14. Appendix W5-A: A Spreadsheet-based Economic Order Quantity Simulation Model

We show the development of an example of a simulation model in a spreadsheet, using the well-known Economic Order Quantity (EOQ) model (see Turban and Meredith [1994] or Hillier and Lieberman [1995]). In the basic EOQ inventory model (see Chapter 1), we assume constant annual demand (continuous demand over time; D = 1,200 boxes of widgets or whatever), a fixed ordering cost (K = $5.00), and a holding cost (H = $1.20 per unit per year, usually a percentage of the unit cost). All costs and the demand are constant (total certainty) and independent. Because this decision is made once, this is a static model. The decision variable Q represents the quantity ordered. The total inventory cost (TC) is the sum of annual ordering costs (TO) and inventory carrying costs (TH). (Note: on average, there are Q/2 boxes in stock, but you need enough space to store Q items.)

TC = TO + TH = KD / Q + H (Q / 2) = $5(1,200) / Q + $1.20 (Q / 2)

= $6,000 / Q + $0.60 Q

To find the minimum of the total cost TC, we can take three different approaches.

1. We can plot values for TC as a function of Q and visually identify the minimum of TC (see Exercise 1).

2. We can invoke Solver in Excel, indicating that the cell containing TC is to be minimized while Q can vary. As this is an "unconstrained" optimization problem, there are no constraints (see Exercise 2).

3. We can apply a little calculus (just a little, or you can trust us) to derive the formula for the EOQ (or we can look it up in Turban and Meredith [1994] or Hillier and Lieberman [1995]). The EOQ is determined by

EOQ = SQRT(2KD / H) = SQRT(2(5)(1,200) / 1.20) = 100 boxes

where SQRT means square root.

Because the annual demand is 1,200, an order is placed 1,200/100 = 12 times per year, or once per month.

The total inventory cost is:

TC = $6,000 / 100 + $0.60 (100) = $60 + $60 = $120
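The arithmetic above is easy to verify with a few lines of code. This is a sketch of the deterministic model only, not of the spreadsheet itself:

```python
# A quick check of the deterministic EOQ calculations above.
from math import sqrt

D, K, H = 1200, 5.00, 1.20            # annual demand, order cost, holding cost

def total_cost(Q):
    """TC = TO + TH = KD/Q + H(Q/2)."""
    return K * D / Q + H * Q / 2

EOQ = sqrt(2 * K * D / H)
print(EOQ, total_cost(EOQ))           # EOQ = 100 boxes; TC = $120 per year
print(D / EOQ)                        # 12 orders per year, i.e., monthly
```

Plotting `total_cost` over a range of Q values (Exercise 1b) shows the characteristic U-shape, with the ordering and holding cost curves crossing exactly at the EOQ.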

These formulas are implemented in the small, deterministic (under assumed certainty) DSS spreadsheet model shown in Figure W5-A.1.

figw5a_1.gif

FIGURE W5-A.1 Excel Spreadsheet Model of a Deterministic Economic Order Quantity Model.

The Simulation Model

Suppose now that the demand is considered under risk. Based on the EOQ computation above, we plan to order once per month, but demand follows a normal probability distribution with a mean of 100 boxes and a standard deviation of 10 boxes per month. Now we must develop a dynamic risk model to determine the best (or at least a very good) order quantity. In addition to the other costs, there is a cost of running out of boxes: a fixed stockout cost of $10 (paid at most once per month, and only if we stock out) and a per-unit stockout cost of $0.25 (because of lost customer goodwill).

Clearly, if we adopt a policy of ordering the EOQ = 100, there is a 50 percent chance of a stockout in any month (why?). This is the case of no buffer stock. Thus, we should consider holding some buffer stock by ordering more than 100 boxes. By ordering more, we lessen the likelihood of a stockout, saving stockout costs, but we incur additional inventory costs. We can evaluate this situation with the Excel-based simulation model shown in Figure W5-A.2. The results are shown for 5 years, broken down by each of the 60 months, for a buffer stock of 0.

figw5a_2

FIGURE W5-A.2 Excel Spreadsheet Simulation Model of an Economic Order Quantity Model.

The cell formulas and explanations are given in the table below for cells that do not clearly indicate what they are (simple numbers or labels).

Cell  Formula                                  Explanation
E52   =AVERAGE(C69:C128)                       Average Demand
F52   =STDEV(C69:C128)                         Standard Deviation of Demand
H52   =E53*12                                  Annualized Average Demand
      (These formulas are repeated for Rows 53:60)
G62   =SUM(S69:S128)                           Total Number of Stockouts
B69   =RAND()                                  Random Uniform 0-1
C69   =ROUND(NORMINV(B69,$E$42,$E$43),0)       Demand: Rounded Random Normal (100, 10)
                                               (demand must be an integer)
D69   100                                      Starting Inventory
E69   =D69-C69                                 Starting Inventory minus Demand
F69   =IF(E69>0,C69,$E$42)                     Monthly Sales (cannot exceed available supply)
G69   =IF(E69<0,-E69,0)                        Monthly Stockout (must be nonnegative)
H69   =D69-F69                                 Inventory at End of Month
I69                                            Amount Ordered to Bring the Initial Inventory
                                               for Next Month to 100
J69   =H69+I69                                 Ending Inventory plus Amount Ordered =
                                               Next Month's Starting Inventory
K69   =(D69+H69)/2                             Average Inventory in Stock this Month
M69   =K69*$E$35/12                            Average Monthly Inventory Cost
N69   =$E$34                                   Order Cost = $5.00
O69                                            Variable Stockout Cost = $0.25 * No. of Stockouts
P69   =IF(G69>0,$E$37,0)                       Fixed Stockout Cost
Q69   =SUM(M69:P69)                            Total Monthly Cost
S69   =IF(G69>0,1,0)                           = 1 if there is a Stockout, = 0 if not
D70   =J69                                     Starting Inventory this Month =
                                               Ending Inventory Last Month +
                                               Order Quantity This Month

Columns A-S for Row 69 are copied to Rows 70:99, except for Column D, where Cell D70 is copied to Rows 71:99.

At the top of the worksheet are the standard EOQ calculations. The next section summarizes the results of the 60-month run: the average Total Monthly Cost is $16.13, translating to a one-year expense of $193.55. There were 28 stockouts (quite excessive) over the 60 months. The final section shows all the results for the 60-month run.

We performed a what-if analysis with the model by changing the decision variable, the order quantity in cell E47, to 110 and 120. We made 10 runs each for Q = 100, Q = 110, and Q = 120. The average results, summarized at the bottom, are $189.639, $152.735, and $158.497 for Q = 100, 110, and 120, respectively (we copied the value of cell H60 to these cells). You can see the effect of the buffer stock: as Q increases, the total cost initially decreases because of lower stockout costs, but by Q = 120 this decrease is offset by the increase in inventory costs.
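The same experiment can be sketched outside the spreadsheet. The Python function below mirrors the logic of Figure W5-A.2 (rounded normal demand, sales limited by stock on hand, a monthly order restoring inventory to 100 plus the buffer); because its random stream differs from Excel's RAND/NORMINV, the exact costs will not match the figures quoted above.

```python
# A Python sketch of the monthly EOQ-with-buffer simulation logic.
import random

def simulate(buffer=0, months=60, seed=1):
    rng = random.Random(seed)
    K, H = 5.00, 1.20                    # order cost, annual holding cost
    fixed_so, unit_so = 10.00, 0.25      # fixed and per-unit stockout costs
    level = 100 + buffer                 # order-up-to level
    start, total_cost, stockouts = level, 0.0, 0
    for _ in range(months):
        demand = round(rng.gauss(100, 10))
        short = max(demand - start, 0)   # unmet demand this month
        end = max(start - demand, 0)     # inventory at end of month
        avg_inv = (start + end) / 2
        total_cost += avg_inv * H / 12 + K
        if short > 0:
            stockouts += 1
            total_cost += fixed_so + unit_so * short
        start = level                    # order restores next month's stock
    return total_cost, stockouts

for q in (100, 110, 120):                # the three what-if policies
    total, n = simulate(buffer=q - 100)
    print(q, round(total * 12 / months_per_run if False else total * 12 / 60, 2), n)
```

Averaging over many seeds reproduces the qualitative result in the text: the annualized cost falls sharply as the buffer absorbs most stockouts, then creeps back up as holding costs dominate.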

Stockouts can be dangerous if the model describes a production process: they may stop production and idle workers and equipment. For a store, or a firm delivering service to customers, a stockout can be devastating, and there is also the lost goodwill of the customers.

Appendix W5-A References

Hillier, F. S., and G. Lieberman. (1995). Introduction to Operations Research, 6th ed. New York: Irwin.

Turban, E., and J. Meredith. (1994). Fundamentals of Management Science, 6th ed. Homewood, IL: Irwin.

Appendix W5-A Exercises

1. Implement the deterministic EOQ model in Excel as shown in Figure W5-A.1.

a. Use the EOQ formula to determine the EOQ directly.

b. Plot the TC values as a function of Q and visually identify the minimum of TC.

c. Invoke Solver in Excel (indicate that the cell containing TC is to be minimized while Q can vary; recall that there are no constraints).

2. Implement the simulation model shown in Figure W5-A.2 and see how sensitive the total cost of the system is to Q starting from the EOQ. What is the best order quantity you found and why? (Hint: Plot the total cost as a function of Q.)

3. Do a simulation study evaluating the number of stockouts and stockout cost for the Q values used in Exercise 2. When is the stockout cost insignificant in terms of the decision to be made?

4. Try Q = 100 in the simulation model. Did you stock out in half of the months? Why do you expect to stock out, on average, in one-half of the months?

Appendix W5-A Group Exercise

Visit a local business that inventories something (such as a store, manufacturing company, auto repair shop, or fast food restaurant). Interview the owner or manager about the inventory policies (and models if any) that they use. Do they match the standard assumptions of the EOQ model? If not, what is different? Who developed their policy? Why is it being used? Is it accurate? Do they use the EOQ model to make estimates of the order quantities? If so, explain how. See whether you can estimate (with the owner or manager) the values of the parameters used in the models above. Try the parameters in the model and compare the results to the real system. How closely do they match? If they do not use the EOQ model, why not? Or could they? Do they have a demand forecasting method in operation (see Section W5.1 above)? Write up your results in a report.

W5.15. Appendix W5-B: The Analytic Hierarchy Process (AHP) Using Expert Choice

The Analytic Hierarchy Process (AHP) developed by Saaty [1990, 1995, 1996] helps a decision maker arrive at the best decision in a case of multiple conflicting objectives (criteria). The AHP makes it possible to deal with both tangible and intangible factors. With it, one organizes thought and intuition in a logical fashion using a hierarchy and enters judgments according to understanding and experience. This approach tolerates uncertainty and allows for revision so that individuals and groups can grapple with all their concerns. The AHP is implemented in the Expert Choice software package. (Windows-based versions of the software are available from Expert Choice Inc., Pittsburgh, PA, http://www.expertchoice.com . Demo versions along with a tutorial are available directly over the Web.)

The answers can be tested for sensitivity to changes in judgment. Problems are broken down into smaller constituent parts so the decision maker makes only simple pairwise (two at a time) comparison judgments throughout the hierarchy to arrive at overall priorities for the alternatives of action. The decision problem may involve social, political, and technical factors; several parties; and many objectives, criteria, and alternatives; and may require negotiation. The process can be used to:

• Predict likely outcomes
• Allocate resources
• Plan projected and desired futures
• Exercise control over changes in the decision-making system
• Evaluate employees and allocate wage increases
• Facilitate group decision-making processes
• Select alternatives
• Perform cost/benefit comparisons.

Example

Suppose you have been accepted by five MBA programs. (Hopefully, you would have many alternatives.) Which one would you attend? The selection process is based on several (multiple) criteria, ranging from quality to location. A simplified selection situation is summarized in Table W5-B.1. Below, we work this problem in Expert Choice.

TABLE W5-B.1 An MBA Program Selection Decision

                                  Criteria
Alternative    Quality   Location   Type        Size    Payoff
School 1       Medium    MW         General     Large   Medium
School 2       High      SE         Technical   Small   High
School 3       High      West       General     Large   Medium
School 4       Extreme   NE         General     Small   Extreme
School 5       Low       NW         General     Large   Low

Developing the Hierarchy

The first step is developing the hierarchy (an upside down tree that looks like a traditional organization chart), as shown in Figure W5-B.1. The goal is entered as the first node with a total weight of 1.000. This is followed by entering the main criteria nodes directly beneath the goal node. The total goal weight is initially divided equally among the criteria. (In the professional versions of the Expert Choice software, subcriteria may be added.) Simple editing commands are used to create and edit the nodes (from the Edit drop down menu -- usually just Edit Insert).

figw5b_1.gif

FIGURE W5-B.1 The AHP / Expert Choice Hierarchy for the MBA Program Selection Model 1.

Next, typically all the alternatives are entered beneath each criterion node. Normally, you just enter the alternatives beneath the first criterion node (move the cursor to the leftmost criterion node and select Edit Insert). Then, move the cursor up the hierarchy to the criterion containing the alternatives. Through the Edit drop down menu, select Edit Replicate to Peers to copy the alternatives to the other criteria.

Comparing the Importance of the Criteria Against the Goal

The next step in the process is to verbally compare the criteria, two at a time (say, quality and location). The user specifies which criterion is more important and by how much (moderately more important, strongly more important, etc.). Alternatively, a numerical comparison can be made (e.g., quality is 7 times more important than location; see Table W5-B.2). Once quality has been compared to location, type, size, and payoff, the second criterion, location, is compared to type, size, and payoff. Then the third criterion, type, is compared to size and payoff, and the fourth, size, is compared to payoff. All possible pairs of criteria are compared to check the consistency of the decision maker (more on this later). To activate this mode, the user selects Compare Importance and then skips the preliminary questions. If the second criterion of the pair is more important, just hit the down arrow once to invert the comparison (the second becomes more important than the first). The verbal mode goes quickly once you get the hang of it (and it creates the numeric matrix). Note that when only two items remain to compare, Expert Choice switches to a graphical mode.

TABLE W5-B.2 Pairwise Criteria Comparison for the MBA Program Selection Decision Models (Numeric Mode). A ratio indicates that the value is inverted. The Inconsistency index is 0.032.

            Location   Type   Size   Payoff
Quality        7        4      3      2
Location               1/3    1/3    1/7
Type                          1/2    1/5
Size                                 1/3

This pairwise comparison information is sufficient for the program to calculate the relative importance of each criterion (given in fractions totaling 1.00; for example, quality = 0.411, location = 0.043, etc.). The program also calculates an inconsistency index; this should typically be less than 0.10 (or 0.05 if you want more accuracy). If the inconsistency index exceeds the threshold, you can adjust the relative importances until the desired consistency is achieved. Typically, an inconsistency ratio above 0.20 indicates that you forgot to invert at least one pairwise comparison. A value between 0.10 and 0.20 may be the result of a data-entry error or an error in judgment, or it may be acceptable. Some inconsistency is expected of human decision makers. (Note: consistency is not necessarily a characteristic of a good decision maker; bad decision makers can be notoriously consistent, e.g., consider Ivan the Terrible.)
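How the priorities emerge from a pairwise comparison table can be sketched numerically. The code below builds the full reciprocal matrix implied by Table W5-B.2 and approximates its principal eigenvector by power iteration, one standard AHP technique (Expert Choice's internal computations may differ in detail). The value 1.12 is Saaty's random index for a 5x5 matrix.

```python
# Criteria priorities from the Table W5-B.2 comparisons via power iteration.
n = 5  # Quality, Location, Type, Size, Payoff
upper = {(0, 1): 7, (0, 2): 4, (0, 3): 3, (0, 4): 2,
         (1, 2): 1/3, (1, 3): 1/3, (1, 4): 1/7,
         (2, 3): 1/2, (2, 4): 1/5,
         (3, 4): 1/3}
A = [[1.0] * n for _ in range(n)]
for (i, j), v in upper.items():
    A[i][j], A[j][i] = v, 1 / v          # reciprocal pairwise matrix

w = [1 / n] * n                          # start from equal weights
for _ in range(100):                     # power iteration toward the
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(Aw)                          # principal eigenvector
    w = [x / s for x in Aw]

Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n   # lambda_max estimate
CI = (lam - n) / (n - 1)                 # consistency index
CR = CI / 1.12                           # consistency ratio (RI = 1.12, n = 5)
print([round(x, 3) for x in w], round(CR, 3))
```

The resulting weights put quality near 0.41 and location near 0.04, in line with the figures quoted above, and the consistency ratio lands comfortably below the 0.10 threshold.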

Comparing Alternatives for Each Criterion

The process just described is now repeated for all alternatives against each criterion. We start with "quality" and compare two alternatives at a time. Say we compare the quality of School 1 to that of School 2 (are they equal? or which one is of a higher quality?). Use the down arrow to switch between the first being more important than the second or back.

When comparing numerical values (for example, the payoff could be an estimated dollar value), you can use a numerical comparison instead of the verbal one. For example, the payoff of School 1 might be 10%, that of School 2 might be 35%, and so on. The program can compute the relative percentage (or cost, profit, etc.) on a scale totaling one in a linear calculation. Alternatively, the numerical comparison can be done on a nonlinear scale. (We use the latter approach here, considering the payoff in terms of low through extremely high.) In real-world situations involving hard numerical comparisons, it turns out that the verbal mode is usually as accurate as the numerical mode. The numerical mode is useful for debugging the model when the inconsistency index is too high (note again: an inconsistency index greater than 0.2 usually indicates that a comparison should have been inverted). To toggle between verbal and numerical modes while comparing, use 'V' and 'N'.

We start this process by comparing the schools under quality, then location, until we complete payoff.

Overall Ranking. Using a "weighted average" approach, the overall ranking is obtained by multiplying each alternative's preference under each criterion by the relative importance of that criterion and totaling the weights for each alternative. The end product is a weighted average for each alternative. The alternatives are then ranked from best to worst. This is done with the Synthesis command: select Synthesis and include details if you want them (do so the first couple of times you use the software).

An example of the overall ranking of the MBA program selection problem, executed with Expert Choice, is shown in Figure W5-B.2. Note that a graphical view is also provided. This is the result of the Synthesis of all the weights. School 4 is the clear winner with a total weight of 0.465. The inconsistency index is 0.05.

Figw5b_2.gif

FIGURE W5-B.2 The AHP / Expert Choice Overall Ranking for the MBA Program Selection Model 1 - Results.

Rating Utility - Model 2

For a large number of alternatives, the pairwise comparison of every two alternatives for all criteria can be very cumbersome (not to mention that more than 7 nodes will not fit on the screen). In this case, a utility approach (ratings) is applied. The ratings approach requires the definition of intensities (or measures) for each criterion. For example, "quality" can be extreme, high, medium, or low; "size" can be small, medium, or large. So, the key difference between the first model described above and the ratings model is that beneath each criterion we do not list the alternatives but rather the possible ‘values’ (symbolic, not numeric) that the criterion can take on. The relative importance of these intensities is then assessed, using a numerical comparison similar to that shown in Table W5-B.1. An example of such a hierarchy is shown in Figure W5-B.3. The criteria are the same, and the criteria comparisons yield the same results at this level as those of the first model. In the next level, the possible values of each criterion appear (for example, "size" may be large, medium, or small). Once the criteria and their symbolic values are entered, pairwise comparisons are performed as described above. Then, in the Comparison drop-down menu, select Ratings. A spreadsheet structure will appear. In it, you enter the alternatives (one per row) followed by the symbolic value of each criterion (in the columns). This table essentially looks like the original model data shown in Table W5-B.1. Once the data are all entered (after you list the alternatives, select the criterion values from the list at the top of the screen), the weighted average of each alternative is calculated and presented on screen. For this example, Schools 1 through 5 rated 0.166, 0.546, 0.263, 0.901, and 0.105, respectively. No pairwise comparison yielded an inconsistency index above 0.071.
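A minimal sketch of the ratings calculation: each alternative is scored by looking up the weight of its symbolic intensity under each criterion. All criteria, intensity weights, and alternatives below are hypothetical, not from the MBA example:

```python
# Hypothetical ratings-approach data (not the MBA example's values).
criterion_weights = {"quality": 0.6, "size": 0.4}
intensity_weights = {  # relative weight of each symbolic value per criterion
    "quality": {"high": 1.0, "medium": 0.5, "low": 0.2},
    "size":    {"large": 0.3, "medium": 1.0, "small": 0.6},
}
alternatives = {
    "School A": {"quality": "high", "size": "medium"},
    "School B": {"quality": "medium", "size": "small"},
}

def rate(alt_values):
    """Weighted average over criteria of the intensity weight chosen."""
    return sum(criterion_weights[c] * intensity_weights[c][v]
               for c, v in alt_values.items())

scores = {alt: rate(values) for alt, values in alternatives.items()}
```

Note that, as in Expert Choice's ratings mode, the scores of different alternatives need not sum to one.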

figw5b_3.gif

FIGURE W5-B.3 The AHP / Expert Choice Hierarchy for the MBA Program Selection Model 2: Utility Ratings.

Sensitivity Analysis

Expert Choice enables the user to see how the final priorities will change if the user changes the relative importance of the criteria. This option is supported by graphics.

Caution: A few words of caution are in order. If you have two (or more) very similar alternatives, you might want to use only one in the model. Then, once you have finished your evaluation, if it wins, create a new model to compare the two similar items. Otherwise, if you include both, they may split the overall weight, so that a normally second-place item may win. Also, if you do not get a clear winner (that is, the top two choices are close, within .05, say), you might want to drop some of the lesser preferred choices, or you may add a few criteria to help break the tie. This rank reversal problem has fueled much debate and controversy over the effectiveness, accuracy, and viability of the AHP. Check it out in the literature!

Overall, though, the methodology and software generally assist decision makers by allowing them to apply managerial judgments directly in the model. The approach helps decompose the problem, helps the decision maker focus better, and leads to an expert choice!

Software Availability

A demo version and a tutorial of Expert Choice are available from Expert Choice Inc., Pittsburgh, PA, over the Web (http://www.expertchoice.com).

References for Appendix W5-B

Saaty, T. L. (1990). Decisions for Leaders: The Analytical Hierarchy Process. Pittsburgh, PA: University of Pittsburgh Press.

Saaty, T. L. (1995). Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Revised ed. Pittsburgh, PA: RWS Publishers.

Saaty, T. L. (1996). Decision Making for Leaders. Vol. II. Pittsburgh, PA: RWS Publishers.


W5.16. Appendix W5-C: Visual IFPS/PLUS

(A small amount of this material is condensed from Gray, P. (1996). Visual IFPS / PLUS for Business. Upper Saddle River, NJ: Prentice Hall, Inc.).

Introduction

Visual IFPS/PLUS (from Comshare Inc., Ann Arbor, MI, http://www.comshare.com) is a complete modeling language. Visual IFPS/PLUS (IFPS for short) is a financial planning language - a fourth-generation programming language (4GL). IFPS (the Interactive Financial Planning System) was originally designed to be a programming language for managers and executives. It uses a more natural syntax than traditional, third-generation programming languages, and handles much of the work that programmers must do to get the computer to do anything practical. Estimates of the efficiency improvements, in terms of the amount of work done by an IFPS model (also called a program), range from a factor of 10 to 100 over traditional languages. For example, summing the digits from 1 to 10 can be done in the following two-line IFPS model:

COLUMNS 1..10

TOTSUM = TOTSUM + COLNUM

An IFPS model is organized like a spreadsheet. The rows represent variables defined using algebraic style expressions, and the columns contain values that are determined by solving the model.
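For intuition, the two-line summing model above can be mimicked column by column in Python. We read the self-reference to TOTSUM as the previous column's value (taken as zero before column 1) - an assumption about IFPS evaluation, not Comshare's documented semantics:

```python
# Sketch of column-by-column evaluation of:
#   COLUMNS 1..10
#   TOTSUM = TOTSUM + COLNUM
# assuming the self-reference picks up the previous column's TOTSUM
# (zero before column 1).
columns = range(1, 11)        # COLUMNS 1..10
totsum = []
for colnum in columns:        # COLNUM is the current column number
    prev = totsum[-1] if totsum else 0
    totsum.append(prev + colnum)
# totsum holds the running sums 1, 3, 6, ..., 55
```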

Our intent in this appendix is to explain briefly the main features of Visual IFPS/PLUS and the basics of how to use it. Refer to Gray [1996] for details about installation and for subtler details about the software and its use. For commercial use, contact Comshare Inc. directly at http://www.comshare.com/ .

Features of IFPS

Although most business students today already know how to use spreadsheets, there are certain features that make IFPS much more effective, once the user gets past the initial part of the learning curve. One should remember that the first release of IFPS preceded the development of the computer spreadsheet by nearly a decade.

We show a simple IFPS model (MOD5A1) in Figure W5-C.1 and its solution in Figure W5-C.2. This example aptly illustrates many of the built-in features of IFPS. IFPS derives its versatility and strength from these built-in features, which include the following:

figw5c_1.gif

FIGURE W5-C.1 Visual IFPS/Plus Model MOD5A1

figw5c_2.gif

FIGURE W5-C.2 Visual IFPS/Plus Solution to Model MOD5A1

Natural Language. The language statement structure is VARIABLE = EXPRESSION. For example, ANNUAL REVENUE = UNIT PRICE * SALES, is a valid model line. This natural format makes the relationships among variables clear. Furthermore, it leads to very compact models. Variable names are in English (or whatever language you speak), may include multiple words and effectively have no length limits, so models are self-documenting. Variable names are NOT case sensitive. If comments are needed, a backslash (\) precedes a comment.

The first statement of every IFPS model is the COLUMNS statement. It is the only data statement and defines the number of columns to be used. The number of rows is determined by how many lines are in the model (except for a few special cases). The natural model structure involves each model line repeating its last expression (separated by commas) until the last column is reached (e.g., INVESTMENT = 1000, 0 means that the first column has 1000, but columns 2 through the last one are 0).

No Order Restriction. You can organize the lines (variables) of a model in any order that fits the way you view the problem. For example, in model MOD5A1, we use the UNIT PRICE and SALES in defining ANNUAL REVENUE in the fifth line of the model, but the UNIT PRICE and SALES are not defined until the Model Parameter section at the bottom. When you direct IFPS to solve a model, it finds what it needs, when it needs it. (No Muss! No Fuss!)

Spreadsheet Solution. The results of an IFPS model, when solved, appear in tabular form as shown in Figure W5-C.2.

What if. Once a model is solved, scenarios may be tested and saved or incorporated directly into the model. The model can easily be reset to its original structure.

Goal Seeking. Two forms of goal seeking are available: line-to-line (entire expressions over all columns) and point-to-point. In the former case, you specify an entire new expression for the Goal Variable and select All columns to change; for the Adjust Variable, select All columns to adjust. In the latter case, you specify a specific column and a single-value expression for the Goal Variable; then select the Adjust Variable and the specific column to adjust.

Built-in Functions. IFPS has many financial, statistical, and mathematical functions including those of net present value, internal rate of return, amortization, and a number of depreciation options. It also includes forecasting functions such as moving averages, trend extrapolation, and regression. Standard mathematical functions such as absolute value, maximum, minimum, etc. are available. There is also a logical IF THEN ELSE statement for setting values of variables (e.g., SALES = IF UNIT PRICE .GT. 100 THEN 50 ELSE 75).

Reserved Words. There are a number of built in reserved words that refer to specific columns in variable expressions. These include PREVIOUS, FUTURE, FOR and [ ]. They are used as follows:

SALES = 100, PREVIOUS * 1.04

indicates that sales in column 1 are 100, and increase by 4% per year thereafter (when no variable is specified, it is assumed to be the same as the variable in the row).

SALES = 100, 120, PREVIOUS 2 + 25

indicates that SALES in column three and thereafter are 25 more than the SALES two columns back in time.

ESTIMATED TAX PAYMENT = .2 * FUTURE PROFIT

looks ahead one column in the PROFIT row to determine the ESTIMATED TAX PAYMENT.

SALES = 4200 FOR 6, PREVIOUS * 1.08

keeps the SALES at 4200 for the first 6 columns, then increases them by 8% per column thereafter.

TAX RATE = FINAL TAX RATE[10]

sets the TAX RATE in every column to the FINAL TAX RATE in column 10.

Other common reserved words include COLNUM (a variable that always equals the column number), THRU (for specifying ranges of variables or columns), and all the command, function, and subroutine names.
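Two of the reserved-word patterns above can be sketched procedurally. The FINAL TAX RATE row below is a hypothetical stand-in invented for illustration:

```python
# Sketch of two reserved-word patterns over n = 10 columns.
n = 10

# SALES = 4200 FOR 6, PREVIOUS * 1.08
sales = []
for col in range(1, n + 1):
    if col <= 6:
        sales.append(4200.0)           # 4200 FOR 6
    else:
        sales.append(sales[-1] * 1.08) # PREVIOUS * 1.08

# TAX RATE = FINAL TAX RATE[10]: every column takes column 10's value.
# FINAL TAX RATE here is a hypothetical row, not from the chapter's model.
final_tax_rate = [0.25] * 9 + [0.40]
tax_rate = [final_tax_rate[9]] * n     # [10] -> index 9 (columns are 1-based)
```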

Other. Other IFPS features include model consolidation, reports and data files, the profile, and command files. Model consolidation is easily handled by the command structure. Meaningfully formatted reports can be created through predefined report files. IFPS models may be linked to data from a database directly through data files. The profile is used to customize the operation of IFPS for an individual user. Command files are programs of IFPS commands that can be used to automate its operations for repetitive tasks.

A Simple Visual IFPS/Plus Model

Refer to the model shown in Figure W5-C.1. This is a six-year investment model.

Line 1: COLUMNS 1..6

Specifies that the model has 6 columns, numbered 1 through 6. (There can be variations on this. You can number columns QTR1, QTR2, etc.; or A, B, C, D; or 2001..2006, etc.)

Line 2: \ Sample Investment Model 5A1

This is a comment.

Line 3: INVESTMENT = 1000, 0

Sets the variable INVESTMENT to 1000 in year 1 and 0 in years 2 through 6.

Line 4: SALES = 100, PREVIOUS * 1.08

SALES are set to 100 in year 1, then increase by 8% per year.

Line 5: ANNUAL REVENUE = UNIT PRICE * SALES

The ANNUAL REVENUE in every column is equal to UNIT PRICE times SALES.

Note that the UNIT PRICE is not specified until line 12.

Line 6: ANNUAL COST = UNIT COST * SALES

The ANNUAL COST in every column is equal to UNIT COST times SALES.

Again, note that a variable has not been defined yet: UNIT COST.

Line 7: ANNUAL NET PROFIT = ANNUAL REVENUE - ANNUAL COST

This is a simple algebraic expression to define the ANNUAL NET PROFIT.

Line 8: NET PRESENT VALUE = NPVC(ANNUAL NET PROFIT, DISCOUNT RATE, INVESTMENT)

The NET PRESENT VALUE is set to a function that relates three other variables. NPVC performs a cumulative net present value calculation. In column k, it sums the ANNUAL NET PROFIT, assumed to occur at the end of each year, for years 1 through k, discounted back in time using the DISCOUNT RATE. It subtracts from that result the sum of the INVESTMENT over columns 1 through k, assuming that these streams of cash occur at the start of the year. Note that financial functions assume that columns represent years. The PERIODS statement indicates how many periods are in a year (12 for months, etc.).
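The cumulative calculation that this description implies can be sketched directly, assuming profits fall at year end and investments at year start. The flat 300-per-year profit stream is a hypothetical round number, not the figure's actual solution:

```python
# Sketch of the cumulative NPV that NPVC describes: profits discounted
# from year end, investments from year start.
def npvc(profits, rate, investments):
    """Cumulative net present value for columns 1..n."""
    result = []
    for k in range(1, len(profits) + 1):
        pv_profit = sum(profits[j - 1] / (1 + rate) ** j
                        for j in range(1, k + 1))           # end of year j
        pv_invest = sum(investments[j - 1] / (1 + rate) ** (j - 1)
                        for j in range(1, k + 1))           # start of year j
        result.append(pv_profit - pv_invest)
    return result

# MOD5A1-style inputs: 1000 invested in year 1, a hypothetical profit of
# 300 per year, and an 8% discount rate over 6 columns.
npv = npvc([300.0] * 6, 0.08, [1000.0] + [0.0] * 5)
```

The column in which the cumulative NPV first turns positive is the break-even year the text asks about.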

\
\ MODEL PARAMETERS
\

Lines 12-14:

Line 12: UNIT PRICE = 10
Line 13: UNIT COST = 7
Line 14: DISCOUNT RATE = 8%

These lines simply set these parameters to their defined values. Note that the discount rate can be specified as 8% or .08.

TIP: If goal seeking is to be done, where the discount rate is expected to repeat in every column, line 14 should be replaced with:

DISCOUNT RATE = 8%, PREVIOUS

How to Create and Solve a Visual IFPS/Plus Model

Visual IFPS/Plus has many features and capabilities that are impossible to cover in depth here. Our intent is to explain how to start the system, create and solve a model, and print results.

System Operation. Visual IFPS/Plus must be installed on your PC system. It uses a client / server operation mode. Some of the work is done on the client - on your PC’s desktop, while part of the work (file management and solution of certain advanced models) is done on the server. The student version on the PC uses the same PC as the client and the server, so, the screen will toggle between each as is required. The Visual IFPS/Plus graphical user interface will be intuitive to users of Microsoft Windows 3.1 or higher. For example, the use of buttons, drop down menus and rudimentary editing are from the Windows user interface. If you are not familiar with Windows, you should learn it before using Visual IFPS/Plus (generally good advice before using Windows-based software!).

Visual IFPS/Plus System Requirements. The basic system requirements are a personal computer that is 100% IBM / PC compatible with an Intel 386 processor chip or higher (a math coprocessor is recommended for the 386 chip), a hard disk with at least 10 megabytes of free space, a 3.5" floppy drive, 8 Mb of RAM (16 Mb is recommended), and a VGA monitor. Visual IFPS/Plus runs under Windows 3.1 or Windows NT. Note, later releases may require different minimum configurations.

Getting Started. Activate Windows and double click on the IFPS Program Group on the desktop. Double click on the Visual IFPS/Plus icon in the folder. (The IFPS/Plus icon will activate the IFPS server, which is a nonvisual version of IFPS/Plus. It requires that commands be typed directly.) When you see the Copyright / Restricted Rights Notice screen, click on the IFPS+ button. You may get a message about a Model and Reports Filename. If so, click on OK. In the Visual IFPS/Plus window, you will see a menu bar with the following pull down menus:

File View Tools Server Help

and a button bar with four buttons:

New Open Context Server.

More buttons will appear once a model is active.

Model and Reports File. IFPS stores all of its models in umbrella files called Model and Reports (M/R) files. These are ‘megafiles’ that contain collections of related models and other files. Click on the Context button (which opens a Set Entity Context Window) and select an M/R file from the list, or create one at this time. The open M/R File is indicated in the Status Bar at the bottom of the Visual IFPS/Plus window.

Creating a New Model. Next, to create a new model, click on the New button. You will see a window open on the screen. Using the IFPS rudimentary visual editing commands, type in your model line by line (type in the model in Figure W5-C.1). (Note: to continue a variable definition statement to the next line, use a ‘ at the end of the first line.) When you are done, select File, Save As (use Save if you have already saved it) and give the model a name. It will actually be stored in the Model and Reports File.

Opening an Existing Model. If your model already exists, click on Open and select your model from the list.

Menus and Commands. Once an M/R File is active and a model is created or opened, the available menus have expanded to include:

File Edit View Format Tools Options Analysis Window Server Help .

The buttons now include

New Open Templates Solve Using What If Goal Seek Report Server .

Solving the Model. To solve the model, click on the Solve button. IFPS will ask you which columns and variables you want. Select OK for All. If the model has an error, IFPS will report it next, and give you a hint as to how to fix it. Otherwise, it will solve the model and produce a tabular output screen as shown in Figure W5-C.2. When does this investment break even?

What If. What if analysis can be performed by closing the solution window and, with the model still active, clicking on the What If button. A Case window will open. In this Case window, you type in new model lines. Then, select Solve to solve the case. The solution appears quickly in a What If solution window. You may save the Case for later use (with the Case window open, select File, Save As), or replace the existing model lines with those specified in the Case window. What would happen if the investment required an additional infusion of capital - a payment of $500 at the start of year 5? The new What If statement would be INVESTMENT = 1000, 0 FOR 3, 500, 0. You can save this (or any) What If Case or Append it to the model (which eliminates the original model lines for the variables in the Case).

Goal Seeking. As mentioned above, there are two kinds of goal seeking, but both work the same way. Either with an open solution window to the model, or just the model window open, click on the Goal Seek button. You will see a Goal Seek window containing a tablet with three columns labeled ‘Goal Variable’, ‘Goal Expression’, ‘Adjust Variable’, with boxes for selecting the Column for each variable. Before attempting to Goal Seek with model MOD5A1, change the DISCOUNT RATE = 8% line to DISCOUNT RATE = 8%, PREVIOUS and Save as MOD5A2. To perform a line-to-line Goal Seek, to determine the DISCOUNT RATE that yields a NET PRESENT VALUE initially at -800 and rising by 300 per year, select NET PRESENT VALUE for the ‘Goal Variable’ (use the arrow button to find it), select All columns, and for the expression use = -800, PREVIOUS + 300. For the ‘Adjust Variable’, select DISCOUNT RATE. You must select All columns. Then click on the Solve button. The results appear in a Goal Seek solution window.

For a point-to-point Goal Seek, say we want to know the internal rate of return (actually, we could have used the IRR function). The IRR is the discount rate that yields a zero net present value. We could use trial and error via What If to find it, or, simply activate Goal Seek, select NET PRESENT VALUE for the ‘Goal Variable’ (use the arrow button to find it), select column 6, and for the expression use ‘= 0. For the ‘Adjust Variable’, select DISCOUNT RATE. You must select a single column - for this example, column 1. Then click on the Solve button.

Printing: From any screen, selecting File, Print will print the information (model, solution, etc.) of the current screen. Tip: to capture an entire screen in Windows, use the ‘Alt’ + ‘Print Screen’ keys. This copies the screen into the Windows clipboard. You can then close Visual IFPS/Plus, open a word processor, and paste the screen image into a file.


W5.17. Appendix W5-D: Visual IFPS/Plus and Spreadsheet Simulation Models

(A small amount of this material is condensed from Gray, P. (1996). Visual IFPS / PLUS for Business. Upper Saddle River, NJ: Prentice Hall, Inc.).

Introduction

Visual IFPS/Plus supports Monte Carlo simulation. You create the model in the same way that you create a deterministic model described above. There are several random number distributions in IFPS to specify stochastic (random) variables like demand, price, etc. IFPS Monte Carlo simulation only allows for equal time interval models (ideal for cash flow models), and must run on the server (but, a Monte Carlo model may be solved deterministically on the client for debugging). You are basically running the mainframe version of IFPS/Plus after you create the model. Recall though that the Student Version will have both the client and server running on the PC.

Here, we provide a simple example, and the basic steps one follows to create and run a Monte Carlo simulation model in Visual IFPS/Plus. We also provide a comparable simulation implementation in Excel (alternatively, Lotus 1-2-3 may be used). If you haven’t done so, you should read the previous appendix and implement a couple of Visual IFPS/Plus models.

A Visual IFPS/Plus Monte Carlo Model

In Figure W5-D.1, we show model MONTE1. It is an investment model over 10 years in which SALES are randomly distributed in each time period. SALES follows a normal (Gaussian) distribution with a mean of AVERAGE SALES (which starts at 100 in year one and increases by 4% per year) and a standard deviation of AVERAGE SALES / 10. This is implemented in the SALES model line:

SALES = NORRANDR(AVERAGE SALES, AVERAGE SALES / 10)

The normal random number function NORRANDR(mean, standard deviation) is used. To repeat the same value in every column, use NORRAND(mean, standard deviation) (i.e., drop the final R).
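The SALES line can be sketched outside IFPS as follows. This is only an illustration of the NORRANDR/NORRAND distinction using Python's random module, not Comshare's implementation:

```python
import random

# AVERAGE SALES starts at 100 and grows 4% per year; SALES in each column
# is normal with that mean and a standard deviation of mean / 10.
random.seed(42)                 # arbitrary seed, for reproducibility only
n_years = 10
average_sales = [100.0]
for _ in range(n_years - 1):
    average_sales.append(average_sales[-1] * 1.04)

# NORRANDR: a fresh draw in every column
sales = [random.gauss(mu, mu / 10) for mu in average_sales]

# NORRAND: one draw, repeated across all columns
one_draw = random.gauss(average_sales[0], average_sales[0] / 10)
sales_norrand = [one_draw] * n_years
```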

Figw5d_1.gif

FIGURE W5-D.1 Visual IFPS/Plus Model MONTE1

This model is created by the same procedure described in Appendix W5-C. When model MONTE1 is solved directly on the client (or even on the server with the SOLVE command) as a deterministic model, the average values for the random numbers are used. Its deterministic solution is shown in Figure W5-D.2.

Figw5d_2.gif

FIGURE W5-D.2 Deterministic Solution to Visual IFPS/Plus Model MONTE1. This run was performed on the Client in Windows.

Built in Probability Distributions of IFPS/Plus

VISUAL IFPS/Plus has the following built in random number generators:

UNIRAND(start, end) - Uniform distribution
TRIRAND(lower, most probable, upper) - Triangular distribution
T1090RAND(lower, most probable, upper) - Alternate form of the triangular distribution (uses the 10th and 90th percentiles)
NORRAND(mean, standard deviation) - Normal (Gaussian) distribution
GENRAND(list of X, Y points) - Specify your own distribution via an approximation
CUMRAND(list of 10 percentile points) - Specify your own distribution via an approximation

These distribution functions all repeat the number generated in the first column over all columns. To generate a different random number in every column, add an R to the end of the distribution function’s name (e.g., UNIRANDR, TRIRANDR, etc.).

Monte Carlo Solution of the Model

At this point, we assume model MONTE1 has been typed in and saved. It has been debugged by solving it on the client (Visual mode). To solve model MONTE1 as a stochastic model, click on the Server button. You will see the Command Session window. (Note - variable names are now case sensitive.) Type the following:

MODEL MONTE1

to activate the model

LOGOUTPUT MONTE1DO.LOG

to create a log file called MONTE1DO.LOG of the activities of the session.

If you want to see the model, type

LIST

(Caution: don’t try to edit the model on the server.)

You can get a list of commands by typing

HELP TOPICS

If you want to solve the model as you did in the Visual mode (deterministically using the average values), you type

SOLVE

ALL

to see the solution of all columns. The information will scroll on the screen like a mainframe terminal session in the window.

There are two ways to solve an IFPS/Plus Monte Carlo model:

1. Deterministic Scenarios. Random variables can be set to their HIGH, LOW, MEAN or MEDIAN values. This way the best and worst conditions may be tested to help validate the model. For example, HIGH UNIT COST, LOW UNIT PRICE, and HIGH SALES (if we are losing money on each sale) might be a worst-case situation, while LOW UNIT COST, HIGH UNIT PRICE, and HIGH SALES might be a best-case situation (if we are making money on each sale). Note that just solving a model means that the mean values are used. Also, for normally distributed random variables, the mean +/- 2.5 standard deviations is taken as the HIGH and LOW value, respectively.

For model MONTE1, we are interested in the behavior of the NET PRESENT VALUE relative to the values of the SALES. An example of HIGH SALES is shown in Figure W5-D.3. After the model was activated and solved as described above (following the SOLVE and ALL commands), the scenario was created by typing

HIGH SALES
ALL

in response to the ENTER SOLVE OPTIONS prompt. If you get stuck, type SOLVE first.

Figw5d_3.gif

FIGURE W5-D.3 Scenario Test of Visual IFPS/Plus Monte Carlo Model MONTE1. Here HIGH SALES was used. This run was performed on the server. User commands are indicated by ‘<<‘.

2. Stochastically. We perform several hundred runs of the simulation model with random numbers and collect statistics on specific variables in specific columns. Here, we are interested in the statistical behavior of the NET PRESENT VALUE in column 10.

When prompted to ENTER SOLVE OPTIONS, type

MONTE CARLO 100          to perform 100 runs
COLUMNS 10               to examine variables in column 10
NORM NET PRESENT VALUE   to get a Normal Approximation Table by 10-percentile-point intervals
FREQ NET PRESENT VALUE   to get the 90th, 80th, ..., 10th percentile points
HIST NET PRESENT VALUE   to get a histogram
NONE

The statistical results for the NET PRESENT VALUE in column 10 of the 100 runs are shown in Figure W5-D.4. The values in the Frequency Table are compared to the values in the Normal Approximation Table to see how ‘normal’ the results were (they look pretty ‘normal’, but seem to be weighted a bit heavily to the left). Standard statistics for the runs are reported next. They include the mean, standard deviation, skewness, kurtosis, and the 10% and 90% confidence intervals for the mean. Finally, a histogram is shown.
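The summary statistics IFPS reports can be sketched with the standard library. The 100 NPV outcomes below are drawn from a hypothetical normal distribution rather than produced by re-running MONTE1:

```python
import random
import statistics

# Hypothetical sample of 100 NPV outcomes (mean 500, std dev 120); a real
# run would collect the column-10 NET PRESENT VALUE from each simulation.
random.seed(7)
npv_runs = [random.gauss(500.0, 120.0) for _ in range(100)]

mean = statistics.mean(npv_runs)
stdev = statistics.stdev(npv_runs)

# 10th..90th percentile cut points, as in the Frequency Table
deciles = statistics.quantiles(npv_runs, n=10)
```

Comparing these empirical deciles to the quantiles of a normal distribution with the same mean and standard deviation is exactly the Frequency Table vs. Normal Approximation Table check described above.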

Figw5d_4.gif

FIGURE W5-D.4 Monte Carlo Simulation Run of Visual IFPS/Plus Monte Carlo Model MONTE1. Here 100 runs were done. This run was performed on the server

If we want to pool an additional 100 runs with our results thus far, we would type

POOL 100
NONE

When completed, we select File Close from the menu bar at the top. We can now exit Visual IFPS/Plus and incorporate our Log file, MONTE1DO.LOG into a word processing document, or simply print it directly. (Click Here to see the Log File)

One important feature of simulation in IFPS/Plus is the compactness of the model. This is best illustrated by examining how one can create a simulation model directly in a spreadsheet. Next, we show the implementation of a spreadsheet simulation model comparable to the IFPS/Plus model MONTE1. The spreadsheet simulation model is shown in Figure W5-D.5.

Figw5d_5.gif

FIGURE W5-D.5 Excel Spreadsheet Simulation Model Comparable to Visual IFPS/Plus Model MONTE1

The main advantage of such an implementation is that most managers already understand how to use a spreadsheet, so such a model would be more palatable. Note that each cell must be specified, and special macro programming must be done to capture the results of several runs. Results for each year and the statistics must be explicitly coded into the cells. Documentation of the model is performed through explicit comments, separate from the cell entries. Graphs of the results must be explicitly defined as well. Of course, some of this can be automated by using specialized packages like @Risk (Palisade Corp., Newfield, NY, www.palisade.com).

W5.18. Appendix W5-E: Waiting Line Modeling (Queueing) in a Spreadsheet

Introduction to Waiting Lines

Waiting lines describe one of the most common decision making situations with which many people are familiar. They occur everywhere - theme parks, airports, movie theaters, banks, supermarkets, etc. This model class also describes many industrial situations, such as telephone / communication system design, forklift scheduling and textile spool replacements. Any time customers ‘line up’ (‘queue up’ in England) for service, there is a queue. Essentially, whenever there are fewer servers (tellers in a bank, forklifts, etc.) than customers (people, items to be moved, etc.), there is a waiting line.

In basic waiting line theory, the arrival rate of customers (or alternatively the interarrival times) and the service times are considered to be stochastic (random), with well-known probability distributions (Poisson and exponential, respectively) that describe them accurately. Here, we focus on the ‘standard’ queueing model, which might be found in a bank or movie theater (we ignore extensions that include balking, waiting line size limits, etc.). In our example, the Main Street Bank (MSB), customers arrive according to a Poisson process (i.e., characterized by a Poisson probability distribution) with a mean arrival rate of λ = 15 customers per hour. The service time of an individual server is assumed to be exponentially distributed, which corresponds to a mean service rate of μ = 20 customers per hour. There is a single waiting line with one server. Fortunately, the assumptions are not too restrictive and cover a wide variety of situations. Further, the formulas that describe a simple waiting line system’s performance are relatively easy to state. See Hillier and Lieberman [1995] and Turban and Meredith [1994] for details on the theoretical models and other examples of waiting lines.

Performance Measures for Waiting Line Models

The following formulas describe a set of standard performance measures (potential result variables) of the single waiting line system:

L = expected number of customers in the system, including those being serviced,

Lq = expected number of customers waiting for service (in line only),

W = expected total time a customer spends in the system (including service time),

Wq = expected waiting time of a customer (excluding service time),

P(Wq > t) = probability that a customer's waiting time (excluding service) exceeds t,

ρ = utilization of the system (a parameter used in the calculations),

s = number of servers.

For a single server, these performance measures are determined by the formulas:

ρ = λ / μ
W = 1 / (μ - λ)
L = λ / (μ - λ)
Wq = λ / (μ(μ - λ))
Lq = λ^2 / (μ(μ - λ))
P(Wq > t) = ρ e^(-μ(1 - ρ)t)

(Note, e = 2.7182818... is the base of the natural logarithm.)
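These single-server formulas are easy to verify in a few lines. With the Main Street Bank rates (λ = 15, μ = 20), the 45-minute tail probability works out to about 0.0176, as quoted later in the text:

```python
import math

# Single-server (M/M/1) performance measures from the formulas above.
def mm1_measures(lam, mu):
    rho = lam / mu                        # utilization
    W = 1 / (mu - lam)                    # expected time in system
    L = lam / (mu - lam)                  # expected number in system
    Wq = lam / (mu * (mu - lam))          # expected waiting time in line
    Lq = lam ** 2 / (mu * (mu - lam))     # expected number in line
    return rho, W, L, Wq, Lq

def prob_wait_exceeds(lam, mu, t):
    """P(Wq > t) = rho * e^(-mu * (1 - rho) * t)."""
    rho = lam / mu
    return rho * math.exp(-mu * (1 - rho) * t)

rho, W, L, Wq, Lq = mm1_measures(15, 20)   # Main Street Bank parameters
p_45min = prob_wait_exceeds(15, 20, 0.75)  # about 0.0176
```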

Using the performance measures, we can evaluate service levels for the waiting line system. We next show a simple spreadsheet model to describe the complicated nature of waiting lines (queues). We then extend it into a decision making spreadsheet model.

Excel Spreadsheet Waiting Line Model Implementation

Situation 1

In Figure W5-E.1, we show an Excel spreadsheet model that evaluates the performance measures of a single-server waiting line (s = 1) for the Main Street Bank (a very small bank), in which an analyst has spent several days observing the real system to determine the following parameters: the mean arrival rate is λ = 15 customers per hour, and the mean service rate is μ = 20 customers per hour. The parameters of the spreadsheet waiting line model are clearly defined at the top, while the results are shown at the bottom. The text boxes on the spreadsheet describe the details of the waiting line performance measure formulas (the numbers and labels are clearly shown in the rest of the cells). (Note that in Lotus 1-2-3, formulas are preceded by the ‘@’ symbol rather than by the equals sign (=) as in Excel.)

Figw5e_1.gif

FIGURE W5-E.1 Excel Spreadsheet Model of a Single Server Waiting Line Standard Performance Measures.

Situation 2

Susan Sanders, the bank manager, is very sensitive to customers waiting in line too long because the bank’s largest depositor complained about having to wait in line for 45 minutes yesterday. We can determine the probability that a customer waits more than 45 minutes (0.75 hour) in the section of the spreadsheet near the bottom, where P(Wq > t) is calculated for t = 0.75 hour. Note that the performance is not too bad: the probability is only 0.0176. However, there is a small but nonzero probability that a customer experiences a very long wait (for a 1.5-hour wait, it is 0.000415), so these tail probabilities should be considered carefully in managing a waiting line.
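Both probabilities follow directly from the tail formula P(Wq > t) = ρe^(−μ(1−ρ)t); a quick Python check (our own sketch, not the spreadsheet's cells):

```python
import math

lam, mu = 15.0, 20.0      # customers per hour
rho = lam / mu            # utilization = 0.75

def p_wait_exceeds(t_hours):
    """P(Wq > t): chance a customer waits in line longer than t hours."""
    return rho * math.exp(-mu * (1 - rho) * t_hours)

print(round(p_wait_exceeds(0.75), 4))   # 45-minute wait: prints 0.0176
print(round(p_wait_exceeds(1.5), 6))    # 90-minute wait: prints 0.000415
```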

Situation 3

Main Street Bank can lease new computer hardware for $20 per hour to speed up the teller’s performance to 30 customers per hour. We can determine the value of this new hardware by evaluating the cost of the system. The hourly cost of tellers is known (the salary), but the customers’ waiting cost must also be factored into the evaluation. There is a difficulty, however, in establishing the customer waiting cost. We recommend that demographic data (check census data for your local area or MSA; see the Exercises and Internet Exercises in Chapter 4) be used to determine how much a customer’s time is worth, with an added amount to compensate the customer for lost goodwill due to waiting. In our example, a teller earns $17 per hour (including fringe benefits), and a customer’s waiting cost is $100 per hour. To determine the customers’ hourly waiting cost, we multiply the hourly waiting cost by the expected number of customers waiting in line: $100 Lq. The total cost is then TC = $37s + $100 Lq (the $37 combines the teller’s $17 salary and the $20 hardware lease). This case is easily solved, and the total cost of the new system is compared to that of the old one in Figure W5-E.2. The new hardware should definitely be leased; but if it were to cost more than $175 per hour, it should not.
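The cost comparison and the $175 break-even lease rate can be reproduced with a short Python calculation (a sketch under the stated costs; variable names are ours):

```python
def hourly_cost(lam, mu, server_cost, wait_cost=100.0):
    """Total hourly cost = server cost + waiting cost * Lq (one server)."""
    Lq = lam**2 / (mu * (mu - lam))   # expected number waiting in line
    return server_cost + wait_cost * Lq

lam = 15.0
old = hourly_cost(lam, mu=20.0, server_cost=17.0)         # teller only
new = hourly_cost(lam, mu=30.0, server_cost=17.0 + 20.0)  # teller + lease
print(old, new)          # prints 242.0 87.0
print(old - new + 20.0)  # maximum lease rate at break-even: prints 175.0
```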

Interesting extensions to the basic waiting line model are called for in the Group Exercises below.


FIGURE W5-E.2 Excel Spreadsheet Model of a Single Server Waiting Line Cost Computations. Total Cost for the Bank.

Appendix W5-E References

Hillier, F. S. and G. Lieberman. (1995). Introduction to Operations Research. 6th ed. New York: Irwin.

Turban, E. and J. Meredith (1994). Fundamentals of Management Science. 6th ed. Homewood, IL: Irwin.

Appendix W5-E Exercises

1. Create the spreadsheet waiting line models. Why is $175 per hour the break-even point for deciding whether or not to lease the new equipment?
2. What happens to the bank’s performance (especially the probability of waiting more than 6 minutes in line for service) if the manager installs an Automatic Teller Machine outside the bank which cuts the arrival rate inside the bank to 10 customers per hour? Is there a significant difference in the probability that a customer would wait more than 45 minutes? Would the bank be servicing more customers? Consider the issue of attracting customers who generally do their banking when the bank is closed.
3. Suppose there is no Automatic Teller Machine. What if the manager were to hire a faster teller who can service an average of 25 customers per hour? How do the performance measures change? What if this new teller were to earn $22 per hour? Should the manager replace the original teller with the faster one?

Appendix W5-E Discussion Questions

During rush hour, a manager of a fast food restaurant may attempt to minimize the total system cost of a waiting line. But the number of servers may not be sufficient for the system to remain stable. For example, if customers arrive on average one per minute, but the average service time is 2 minutes, then after one hour there will be (on average) 30 people waiting in line. The utilization parameter ρ = λ / (sμ) > 1 indicates that, in the long run, more customers arrive than can be served.
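The arithmetic in this example can be checked in a couple of lines of Python (a hypothetical sketch; variable names are ours):

```python
lam = 60.0   # arrivals per hour (one per minute)
mu = 30.0    # services per hour per server (2-minute service time)
s = 1        # number of servers

rho = lam / (s * mu)                  # utilization: 2.0 > 1, so unstable
backlog_after_1h = (lam - s * mu) * 1.0   # net queue growth over one hour
print(rho, backlog_after_1h)          # prints 2.0 30.0
```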

1. In this case, what performance measures should a manager examine, and how can this seemingly unstable situation be managed?
2. Clearly, this situation occurs in reality (check out a fast food restaurant at lunch time, or examine rush hour traffic). What is really happening?
3. How do managers really handle this situation?

Appendix W5-E Group Exercises

1. Refer to one of the two references on waiting line models. In this exercise, the decision variable is the number of tellers, s.

a. Develop a spreadsheet model that accommodates the basic model for a single line with more than one server (s > 1) (Caution - this is a much more difficult task than the creation of the original model). For the basic case in Situation 1, evaluate the performance measures for values of s = 1, 2, 3, ... , 10. Plot the results. When does the decrease in the expected customer waiting time and the expected number of customers waiting become insignificant? How small is the improvement? What happens if you use multiple lines (one per server)? Which is better, a single line or multiple lines?

b. Suppose Susan has decided that an average waiting time (before service) in excess of 6 minutes (0.1 hours) is excessive. She wants to reach a desired service level for the bank. Knowing that as s increases, the performance measures will decrease, how many tellers (what value of s) yields an average waiting time of Wq ≤ 0.1? On average, what is the probability of waiting more than 0.1 hours? Is this reasonable? How many tellers would be necessary to get this probability down to 0.15?

c. The nature of waiting lines is such that one cannot absolutely minimize any of the standard performance measures as a function of s. As the decision maker increases the number of servers, the quantities all decrease. When s is very large, an increase in s decreases the measures by only a very tiny amount. In fact, the servers will start to interact and their efficiency will drop, which is very counterproductive (or there may not even be room in the facility for all the servers; just imagine a bank with 5,000 tellers). So, it would be prudent to consider another performance measure: the cost of the system. The total cost of the system is the tellers’ salaries plus the customer waiting cost. Suppose a teller earns $17 per hour and the customer waiting cost is $100 per hour (consider just the waiting time, not the service time). Then, the hourly system cost is $17s + $100 Lq. Evaluate the teller cost, the customer waiting cost and the total system cost for s = 1, 2, 3, ..., 10. Plot the total cost versus s. How many tellers (s) yields a minimum total system cost?

d. What-If Analysis: How sensitive is the decision in part c to the customers’ hourly waiting cost (i.e., if the waiting cost increases, at what value does s change, and in which direction; similarly if it decreases)? Over what range of waiting costs is s = 1 optimal, s = 2 optimal, and so on up to s = 5?

e. Now, consider the dynamic nature of the problem. Use the spreadsheet model developed in part c to determine how many tellers are needed over the course of an 8 hour day, where the mean customer arrival rates are

Time of Day           Mean Customer Arrival Rate (per hour)
9:00 - 10:00 am       15
10:00 - 11:00 am      20
11:00 - 12:00 noon    30
12:00 - 1:00 pm       35
1:00 - 2:00 pm        22
2:00 - 3:00 pm        25

f. For the models developed in parts a and c, what is the effect of using multiple lines, one per teller for s = 1, 2, 3, ..., 10?

2. Find a real business that has waiting lines (fast food restaurant, bank, train station, post office, airport, university registration) and examine its waiting lines (get permission from the business owner or manager first). Describe the waiting line situation. Do they use one line or many? Do they have specialty servers, and if so, how many and what kinds? Do they control the arrivals in any way (e.g., airlines control their flights to some degree, and university registration often involves specific time slots for each student)? For a relatively busy hour, observe the operations and record the time of arrival, time that service begins and time that service ends for each customer. Are both the interarrival times and service times exponentially distributed (do customers arrive and leave according to Poisson processes)? Plot a histogram of the interarrival times and service times. Do they ‘look’ exponential? If not, explain why not. Using the mean interarrival and service times to estimate the arrival and service rates, and estimates of the servers’ hourly wages and the customer waiting cost, determine (in a spreadsheet model) the optimal number of servers. Is this sensitive to changes in the customer waiting cost?
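For Group Exercise 1, the single-line, multi-server measures needed in parts a-c follow the standard M/M/s (Erlang C) results found in references such as Hillier and Lieberman (1995); here is a Python sketch (function names are ours, not part of any package):

```python
import math

def mms_measures(lam, mu, s):
    """Standard M/M/s performance measures: one line, s identical servers."""
    a = lam / mu                      # offered load
    rho = lam / (s * mu)              # per-server utilization
    if rho >= 1:
        raise ValueError("unstable system: need lam < s * mu")
    # P0 = probability that the system is empty (Erlang C denominator)
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(s))
                + a**s / (math.factorial(s) * (1 - rho)))
    Lq = p0 * a**s * rho / (math.factorial(s) * (1 - rho) ** 2)
    Wq = Lq / lam                     # Little's law
    return {"rho": rho, "Lq": Lq, "Wq": Wq, "W": Wq + 1 / mu, "L": Lq + a}

def total_cost(lam, mu, s, server_cost=17.0, wait_cost=100.0):
    """Hourly system cost: tellers' salaries plus customer waiting cost."""
    return server_cost * s + wait_cost * mms_measures(lam, mu, s)["Lq"]

# Sweep the number of tellers for the Situation 1 bank (lam = 15, mu = 20)
for s in range(1, 11):
    print(s, round(mms_measures(15.0, 20.0, s)["Wq"], 4),
          round(total_cost(15.0, 20.0, s), 2))
```

With s = 1 the values match the single-server formulas (Lq = 2.25, total cost $242 per hour), which is a useful check on the spreadsheet model.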

W5.19 References

ADA Decision Systems (1995). DPL: Decision Analysis Software for Microsoft Windows. Belmont, CA: Duxbury.

Aggarwal, A. K. (1990). "Simulation as a DSS Modeling Technique." Information Management. Vol. 19. No. 5. 295-305.

Ainscough, T. L. and J. E. Aronson. (1996). "Neural Networks in Business." Journal of Business and Entrepreneurship. forthcoming.

Archetti, F., E. Fagiuoli, P. Confalonieri and F. Zannetti. (1992). "A Knowledge-based Diagnostic System for the Business-Process Ordering of a Plant." Expert Systems Applications. Vol. 4. No. 3. 315-322.

Ariav, G. and M. J. Ginzberg (1985, October). "DSS Design: A Systematic View of Decision Support." Communications of the ACM.

Asoroso, D. (1994). Decision Making Using Lotus 1-2-3 for Windows. New York: McGraw-Hill.

Baird, B. F. (1989). Managerial Decision Making Under Uncertainty: An Introduction to the Analysis of Decision Making. New York: Wiley.

Blanning, R. W. (1992, December/January). "Directions in Model Management Research." Decision Line.

Barnett, W., A. Presley, M. Johnson and D. H. Liles. (1994, October). "Architecture for the Virtual Enterprise." Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Part 1. San Antonio, TX.

Bohoris, G. A. and J. M. Thomas. (1995). "A Heuristic for Vehicle Routeing and Depot Staffing." Journal of the Operational Research Society. Vol. 46. 1184-1191.

Boyd, B. K. and J. Fulk. (1996). "Executive Scanning and Perceived Uncertainty: A Multidimensional Model." Journal of Management. Vol. 22. No. 1. 1-21.

Burke, R. R. (1996, March/April). "Virtual Shopping: Breakthrough in Marketing Research." Harvard Business Review. Vol. 74. No. 2. 120-131.

Campbell, J. F. and A. Langevin. (1995). "The Snow Disposal Assignment Problem." Journal of the Operational Research Society. Vol. 46. 919-929.

Chau, P. Y. K. and P. C. Bell. (1994, November). "Decision Support for the Design of a New Production Plant Using Visual Interactive Simulation." Journal of the Operational Research Society. Vol. 45. No. 11. 1273-1284.

Chapman, R. G. (1995). BRANDS: A Marketing Game. Englewood Cliffs, NJ: Prentice Hall.

De Gooijer, I. G. and K. Kumar. (1992). "Some Recent Developments in Non-Linear Modelling, Testing, and Forecasting." International Journal of Forecasting. Vol. 8. 135-156.

Donaldson, R. G. and M. Kamstra. (1996, Jan). "Forecast Combining with Neural Networks." Journal of Forecasting. Vol. 15. No. 1. 49-61.

Farrington, P. A., B. J. Schroer and J. Wang. (1995). "Front-End System for Modeling Modern Apparel Manufacturing Systems." Computers in Industrial Engineering. Vol. 28. No. 2. 267-277.

Glover, F. (1989). "Tabu Search: Part I." ORSA Journal on Computing. Vol. 1. No. 3. 190-206.

Glover, F. (1990a). "Tabu Search: Part II." ORSA Journal on Computing. Vol. 2. No. 1. 4-32.

Glover, F. (1990b). "Tabu Search: A Tutorial." Interfaces. Vol. 20. No. 4. 74-94.

Glover, F., E. Taillard and D. de Werra. (1993). "A User’s Guide to Tabu Search." Annals of OR. Vol. 41. No. 1-4. 3-28.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley.

Goldratt, E. M. (1996). "Line Simulators Manual." in the "Jonah Course Certification in the Theory of Constraints." New Haven, CT: Abraham Goldratt Institute.

Gordon, J., M. Hequet, M. Picard and D. Stamps. (1996, April). "Virtual Reality Gets Real." Training. Vol. 33. No. 4. 15-16.

Grauer, R. T. and M. Barber. (1996b). Exploring Microsoft Office Professional for Windows 95, Volume II, Version 7.0. Englewood Cliffs, NJ: Prentice Hall, Inc.

Grauer, R. T. and M. Barber. (1996c). Exploring Microsoft Excel 7.0 for Windows 95. Englewood Cliffs, NJ: Prentice Hall, Inc.

Grauer, R. T. and M. Barber. (1996d). Exploring Microsoft Office Professional for Windows 3.1. Englewood Cliffs, NJ: Prentice Hall, Inc.

Grauer, R. T. and M. Barber. (1994). Exploring Microsoft Excel 5.0 for Windows. Englewood Cliffs, NJ: Prentice Hall, Inc.

Hung, M. S., W. O. Rom and A. D. Waren. (1994). Optimization with IBM OSL. Danvers, MA: boyd & fraser publishing company.

Jain, B. A. and B. N. Nag. (1995). "Artificial Neural Network Models for Pricing Initial Public Offerings." Decision Sciences Vol. 26, No. 3. 283-302.

Labys, W. C. (1990). "Evaluating Models." Energy. Vol. 15. No. 3-4. 155-162.

Law, A. M. and W. D. Kelton. (1991). Simulation Modeling and Analysis. 2nd ed. New York: McGraw-Hill.

Lim, J. S. and M. O’Connor. (1996). "Judgmental Forecasting with Interactive Forecasting Support Systems." Decision Support Systems. Vol. 16. 339-357.

Lindo Systems, Inc. (1996b). What’sBest! the spreadsheet solver. Danvers, MA: boyd & fraser publishing company.

Minehan, M. (1996, August)."Virtual Reality: The Next Step in Training." HR Magazine. Vol. 41. No. 8. 144.

Nazareth, D. L. (1993, February). "Integrating MCDM and DSS: Barriers and Counter Strategies." INFOR.

Palvia, S., and S. R. Gordon. (1992, October). "Tables, Trees, and Formulas in Decision Analysis." Communications of the ACM.

Pirlot, M. (1996, August 9). "General Local Search Methods." European Journal of Operational Research. Vol. 92. No. 3. 493-511.

Raiffa, H. (1970). Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Reading, MA: Addison-Wesley.

Sage, A. P. (1989). Decision Support Systems Engineering. New York: Wiley.

Savage, S. L. (1993). Fast QM: Fundamental Analytic Spreadsheet Tools for Quantitative Analysis. New York: McGraw-Hill.

Schrage, L. (1996). Optimization Modeling with Lindo. 5th ed. Chicago, IL: Lindo Systems, Inc.

Shim, J. K. (1994). Strategic Business Forecasting: The Complete Guide to Forecasting Real-World Company Performance. Chicago, IL: Probus Publishing Company.

Sinclair, M. (1993, September). "Comparison of the Performance of Modern Heuristics for Combinatorial Optimization on Real Data." Computers & Operations Research. Vol. 20. No. 7. 687-695.

Stewart, G. (1994, June). "Forecasting the Future." Byte. (http://www.byte.com/art/9406/sec9/art6.htm)

Swain, J. J. (1993, December). "Simulation Software Survey." OR/MS Today.

Tavana, M. and S. Banerjee. (1995). "Strategic Assessment Model (SAM): A Multiple Criteria Decision Support System for Evaluation of Strategic Alternatives." Decision Sciences. Vol. 26. No. 1. 119-143.

Teach, R. D. (1990). "Demand Equations for Business Simulations with Market Segments." Simulation and Gaming. Vol. 21. No. 4. 423-442.

Thalmann, N. and D. Thalmann. (1997). Interactive Computer Animation. Upper Saddle River, NJ: Prentice Hall PTR.

Thierauf, R. J. (1995). Virtual Reality Systems for Business. Westport, CT: Quorum Books.

Tolvanen, J. - P. and K. Lyytinen. (1994). "Modeling Information Systems in Business Development: Alternative Perspective on Business Process Re-engineering." The IFIP Trans A Comput Sci Technol. No. A-54, 567-579.

Walkenbach, J. (1992, June). "Spreadsheets: The Next Generation." PC World.

Watson, H. J., and J. H. Blackstone, Jr. (1989). Computer Simulation, 2nd ed., New York: Wiley.

Whitten, J., L. D. Bentley and V. M. Barlow. (1994). Systems Analysis & Design Methods. 3rd ed. Burr Ridge, IL: Richard D. Irwin, Inc.

Wittkemper, H-G. and M. Steiner. (1996, May). "Using Neural Networks to Forecast the Systematic Risk of Stocks." European Journal of Operational Research. Vol. 90. No. 3. 577-588.

Animation

Bayarri, S., M. Fernandez and M. Perez. (1996, May). "Virtual Reality for Driving Simulation." Communications of the ACM. Vol. 39. No. 5. 72-76.

Hurrion, R. D. (1993, July). "Using 3D Animation Techniques to Help with the Experimental Design and Analysis Phase of a Visual Interactive Simulation Project." Journal of the Operational Research Society. Vol. 44. No. 7. 693-700.

Thalmann, N. and D. Thalmann. (1997). Interactive Computer Animation. Upper Saddle River, NJ: Prentice Hall PTR.

Visual Interactive Simulation

Robinson, S. L. (1994, February). "An Introduction to Visual Interactive Simulation in Business." International Journal of Information Management. Vol. 14. No. 1. 13-23.

Lyu, Jr. J. and A. Gunasekaran. (1993). "Developing a Visual Interactive Simulation Model for Flexible Manufacturing Systems." International Journal of Operations & Production Management. Vol. 13. No. 6. 59-67.

Hansen, G. (1996). Automating Business Process Re-engineering: Using the Power of Visual Simulation Strategies to Improve Performance and Profit. 2nd ed. Upper Saddle River, NJ: Prentice Hall PTR.

Rohrer, M. (1996, May/June). "Visualization and its Importance in Manufacturing Simulation." Industrial Management. Vol. 38. No. 3. 15-18.

Eriksson, I. and A. Finnäs. (1991). "Creating a Visual Simulation Model of an Inventory System." in van den Besselaar, P., A. Clement and P. Järvinen (eds.). Information System, Work and Organization Design. 239-257.

"Spatial Databases and Automated Construction of Virtual Worlds." www-cgi.cs.cmu.edu/afs/cs/usr/maps/www/research/databases.html

Thav, O. (1996, June). "Insights via Interactivity: The Right Simulation Package Furthers Corporate Goals." Manufacturing Systems. Vol. 14. No. 6. 36-37.

Environmental Scanning / Analysis

Auster, E. and C. W. Choo. (1994, September/October). "How Senior Managers Acquire and Use Information in Environmental Scanning." Information Processing & Management. Vol. 30. No. 5. 607-618.

Boyd, B. K. and J. Fulk. (1996). "Executive Scanning and Perceived Uncertainty: A Multidimensional Model." Journal of Management. Vol. 22. No. 1. 1-21.

Costa, J. (1995). "An Empirically-based Review of the Concept of Environmental Scanning." International Journal of Contemporary Hospitality Management. Vol. 7. No. 7. 4-9.

Matthews, C. H. and S. G. Scott. (1995, October). "Uncertainty and Planning in Small and Entrepreneurial Firms: An Empirical Assessment." Journal of Small Business Management. Vol. 33. No. 4. 34-52.

Xu, X. -Z. and G. R. Kaye. (1995). "Building Market Intelligence Systems for Environment Scanning." Logistics Information Management. Vol. 8. No. 2. 22-29.

Model Management

Blanning, R. W. (1993). "Model Management Systems: An Overview." Decision Support Systems. Vol. 9. 9-18.

Chang, A-M., C. W. Holsapple and A. B. Whinston (1993). "Model Management Issues and Directions." Decision Support Systems. Vol. 9. 19-37.

Liang, T. P. and B. Konsynski (1993). "Modeling by Analogy: Use of Analogical Reasoning in Model Management Systems." Decision Support Systems. Vol. 9. 113-125.

BPR Modeling

Tolvanen, J. - P. and K. Lyytinen. (1994). "Modeling Information Systems in Business Development: Alternative Perspective on Business Process Re-engineering." The IFIP Trans A Comput Sci Technol. No. A-54, 567-579

Barnett, W., A. Presley, M. Johnson and D. H. Liles. (1994, October). "Architecture for the Virtual Enterprise." Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Part 1. San Antonio, TX.

Briccarello, P., G. Bruno and E. Ronco. (1995). "REBUS: An Object-oriented Simulator for Business Processes." Proceedings of the IEEE Annual Simulation Symposium, Los Alamitos, CA. 269-277.

Simulation / Modeling and Expert System - BPR

Archetti, F., E. Fagiuoli, P. Confalonieri and F. Zannetti. (1992). "A Knowledge-based Diagnostic System for the Business-Process Ordering of a Plant." Expert Systems Applications. Vol. 4. No. 3. 315-322.

Database Design Modeling and Heuristics

Cerpa, N. (1995). "Pre-physical Data Base Design Heuristics." Information Management. Vol. 28. No. 6. 351-359.

Modeling - Evaluating

Labys, W. C. (1990). "Evaluating Models." Energy. Vol. 15. No. 3-4. 155-162.


Modeling - Database

Bohoris, G. A. and J. M. Thomas. (1995). "A Heuristic for Vehicle Routeing and Depot Staffing." Journal of the Operational Research Society. Vol. 46. 1184-1191.

Campbell, J. F. and A. Langevin. (1995). "The Snow Disposal Assignment Problem." Journal of the Operational Research Society. Vol. 46. 919-929.

Kilov, H. and L. Cuthbert. (1995). "Model for Document Management." Computer Communications. Vol. 18. No. 6. 408-417.

Modeling Languages

Brooke, A., D. Kendrick and A. Meeraus. (1992). GAMS Release 2.25: A User’s Guide. South San Francisco, CA: The Scientific Press.

Fourer, R., D. M. Gay and B. W. Kernighan. (1993). AMPL A Modeling Language for Mathematical Programming. South San Francisco, CA: The Scientific Press.

Lindo Systems, Inc. (1996). LINGO: Optimization Modeling Language. Chicago, IL: Lindo Systems, Inc.

Forecasting

Ainscough, T. L. and J. E. Aronson. (1997a). "A Neural Network Approach for the Analysis of Scanner Data." Journal of Consumer Retailing. forthcoming.

Ainscough, T. L. and J. E. Aronson. (1997b). "Neural Networks in Business." Journal of Business and Entrepreneurship. forthcoming.

Broadie, M. and P. Glasserman. (1996, February). "Estimating Security Price Derivatives using Simulation." Management Science. Vol. 42. No. 2. 269-285.

Donaldson, R. G. and M. Kamstra. (1996, Jan). "Forecast Combining with Neural Networks." Journal of Forecasting. Vol. 15. No. 1. 49-61.

DeLurgio, S. and C. Bhame. (1991). Forecasting Systems for Management. Homewood, IL: Business One Irwin.

de Gooijer, I. G. and K. Kumar. (1992). "Some Recent Developments in Non-Linear Modelling, Testing, and Forecasting." International Journal of Forecasting. Vol. 8. 135-156.

Coates, J. F. (1995 September/October). "How to Recognize a Sound Technology Forecast." Research-Technology Management. Vol. 38. No. 5. 11-12.

Collopy, F. and J. S. Armstrong. (1992). "Rule-Based Forecasting: Development and Validation of an Expert Systems Approach to Combining Time Series Extrapolations." Management Science. Vol. 38. No. 10. 1394-1414.

Geurts, M. D. and D. B. Whitlark. (1996, August). "Using Diffusion Models to Forecast New Product Sales." Marketing & Research Today. Vol. 24. No. 3. 202-204.

Hill, T., M. O’Connor and W. Remus. (1996, July). "Neural Network Models for Time Series Forecasts." Management Science. Vol. 42. No. 7. 1082-1092.

Jain, B. A. and B. N. Nag. (1995) "Artificial Neural Network Models for Pricing Initial Public Offerings." Decision Sciences Vol. 26, No. 3. 283-302.

Jain, C. L. (1996 Summer). "Monthly Corporate Forecast Meeting." Journal of Business Forecasting Methods & Systems. Vol. 15. No. 2. p. 2, 28.

Rao, R. D. and J. K. Parikh. (1996, June). "Forecast and Analysis of Demand for Petroleum Products in India." Energy Policy. Vol. 24. No. 6. 583-592.

Wittkemper, H-G. and M. Steiner. (1996, May). "Using Neural Networks to Forecast the Systematic Risk of Stocks." European Journal of Operational Research. Vol. 90. No. 3. 577-588.

Simulation

Aggarwal, A. K. (1990). "Simulation as a DSS Modeling Technique." Information Management. Vol. 19. No. 5. 295-305.

Bertsche, D., C. Crawford and S. E. Macadam. (1996). "Is Simulation Better than Experience?" McKinsey Quarterly. No. 1. 50-57.

Chapman, R. G. (1995). BRANDS: A Marketing Game. Englewood Cliffs, NJ: Prentice Hall.

Cho, K., I. Moon and W. Yun. (1996, July). "System Analysis of a Multi-product, Small-lot-sized Production by Simulation: A Korean Motor Factory Case." Computers & Industrial Engineering. Vol. 30. No. 3. 347-356.

Ezingeard, J-N. and P. Race. (1995). "Spreadsheet Simulation to Aid Capacity Management of Batch Chemical Processing using JIT Pull Control." International Journal of Production & Operations Management. Vol. 15. No. 10. 82-88.

Fishwick, P. (1995). Simulation Model Design and Execution: Building Digital Worlds. Englewood Cliffs, NJ: Prentice Hall.

Pidd, M., F. N. de Silva and R. W. Eglese. (1996 May 10). "A Simulation Model for Emergency Evacuation." European Journal of Operational Research. Vol. 90. No. 3. 413-419.

Rao, R. D. and J. K. Parikh. (1996, June). "Forecast and Analysis of Demand for Petroleum Products in India." Energy Policy. Vol. 24. No. 6. 583-592.

Teach, R. D. (1990). "Demand Equations for Business Simulations with Market Segments." Simulation and Gaming. Vol. 21. No. 4. 423-442.

Object Oriented Simulations

Ninios, P., K. Vlahos and D. W. Bunn. (1995). "Industrial Simulation: System Modelling with an Object Oriented / DEVS Technology." European Journal of Operational Research. Vol. 81. 521-534.

Simulation for Training and Games

Bertsche, D., C. Crawford and S. E. Macadam. (1996). "Is Simulation Better than Experience?" McKinsey Quarterly. No. 1. 50-57.

Hill, L. D. and K. L. Bender. (1996, July/August). "Market Simulation for Teaching Commodity Trading Skills." Agribusiness. Vol. 12. No. 4. 403-410.

Sim Wireless. (1996, July). "Sim Wireless: The Simulation Game for Serious Decision Makers." Telecommunications (Americas Edition). Vol. 30. No. 7. 34.

Trippi, R. R. (1996 May/June). "The AIM Game: Learning Investment Management Principles through Monte Carlo Simulation." Interfaces. Vol. 26. No. 3. 66-76.

W5.20 Note

Note: For Exercise 16 (in the text), skip part c. Sorry for any inconvenience.


 ©1999 Prentice-Hall, Inc. A division of Pearson Education Upper Saddle River, New Jersey 07458