Kelton (2007) - Chapter 7


CHAPTER 7

Intermediate Modeling and Steady-State Statistical Analysis

Many of the essential elements of modeling with Arena were covered in Chapters 3, 4, and 5, including the basic use of some of the Basic Process and Advanced Process panel modules, controlling the flow of entities, Resource Schedules and States, Sets, Variables, Expressions, Stations, Transfers, and enhancing the animation. In this chapter, we'll expand on several concepts that allow you to do more detailed modeling. As before, we'll illustrate things concretely by means of a fairly elaborate example. We'll start by introducing a new example as described in Section 7.1; expressing it in Arena requires the concept of entity-dependent Sequences, discussed in Section 7.1.1. Then in Section 7.1.2, we take up the general issue of how to go about modeling a system, the level of detail appropriate for a project, and the need to pay attention to data requirements and availability. The data portion required for the model is built in Section 7.1.3 and the logical model in Section 7.1.4. In Section 7.1.5, we develop an animation, including discussion of importing existing CAD drawings for the layout. We conclude this portion with a discussion on verifying that the representation of a model in Arena really does what you want, in Section 7.1.6.

Continuing our theme of viewing all aspects of simulation projects throughout a study, we resume the topic of statistical analysis of the output data in Section 7.2, but this time it's for steady-state simulations, using the model from Section 7.1.

By the time you read and digest the material in this chapter, you'll have a pretty good idea of how to model things in considerable detail. You'll also be in a position to draw statistically valid conclusions about the performance of systems as they operate in the long run.

7.1 Model 7-1: A Small Manufacturing System

A layout for our small manufacturing system is shown in Figure 7-1. The system to be modeled consists of part arrivals, four manufacturing cells, and part departures. Cells 1, 2, and 4 each have a single machine; Cell 3 has two machines. The two machines at Cell 3 are not identical; one of these machines is a newer model that can process parts in 80% of the time required by the older machine. The system produces three part types, each visiting a different sequence of stations. The part steps and process times (in minutes) are given in Table 7-1. All process times are triangularly distributed; the process times given in Table 7-1 at Cell 3 are for the older (slower) machine.

The interarrival times between successive part arrivals (all types combined) are exponentially distributed with a mean of 13 minutes; the first part arrives at time 0. The distribution by type is 26%, Part 1; 48%, Part 2; and 26%, Part 3. Parts enter from the


Figure 7-1. The Small Manufacturing System Layout

left, exit at the right, and move only in a clockwise direction through the system. For now we'll also assume that the time to move between any pair of cells is two minutes regardless of the distance (we'll fix this up later). We want to collect statistics on resource utilization, time and number in queue, as well as cycle time (time in system, from entry to exit) by part type. Initially, we'll run our simulation for 32 hours.
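Arena generates these arrival samples internally, but the arrival process just described can be sketched in a few lines of ordinary Python (a hedged illustration only; the function name and structure are our own, not part of Arena):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

MEAN_INTERARRIVAL = 13.0   # minutes, exponentially distributed
RUN_LENGTH = 32 * 60       # 32-hour run, expressed in minutes

def generate_arrivals():
    """Yield (arrival_time, part_type) pairs for one 32-hour run."""
    t = 0.0  # the first part arrives at time 0
    while t <= RUN_LENGTH:
        u = random.random()
        # Part mix: 26% Part 1, 48% Part 2, 26% Part 3
        # (cumulative cut points 0.26, 0.74, 1.0)
        part_type = 1 if u < 0.26 else (2 if u < 0.74 else 3)
        yield t, part_type
        t += random.expovariate(1.0 / MEAN_INTERARRIVAL)

arrivals = list(generate_arrivals())
```

Each draw of `random.expovariate(1/13)` is one exponential interarrival time with mean 13 minutes, matching the Create module we'll fill in later.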

7.1.1 New Arena Concepts

There are several characteristics of this problem that require new Arena concepts. The first characteristic is that there are three part types that follow different process plans through the system. In our previous models, we simply sent all entities through the same sequence of stations. For this type of system, we need a process plan with an automatic routing capability.

The second characteristic is that the two machines in Cell 3 are not identical: the newer machine can process parts faster than the old machine. Here we need to be able to distinguish between these two machines.

The third characteristic is the nature of the flow of entities through the system. In our previous models, the flow of entities through the system was accomplished using the direct Connect or direct Route option. When you use the Connect option, an entity is sent immediately to the next module, according to the connection, with no time advance in

Table 7-1. Part Routings and Process Times

Part Type  Cell/Time      Cell/Time     Cell/Time      Cell/Time     Cell/Time
1          1 / 6,8,10     2 / 5,8,10    3 / 15,20,25   4 / 8,12,16
2          1 / 11,13,15   2 / 4,6,8     4 / 15,18,21   2 / 6,9,12    3 / 27,33,39
3          2 / 7,9,11     1 / 7,10,13   3 / 18,23,28


the simulation. If we used the Connect option in this new model, we would have to include a number of Decide modules to direct the parts to the correct next station in the sequence followed by Route modules both to model the two-minute transfer time and to enable animation of the part movements. As you might expect, there is an Arena concept, sequences, that allows easy modeling of the flow of entities through the system while allowing for the part transfer time and enabling its animation.

Many systems are characterized by entities that follow predefined, but different paths through the system. Most manufacturing systems have part or process plans that specify a list of operations each part type must complete before exiting the system. Many service systems have similar types of requirements. For example, a model of passenger traffic in an airport may require different paths through the airport depending on whether the passengers check baggage or have only carry-on pieces, as well as whether the passengers have domestic or international flights.

Arena can send entities through a system automatically according to a predefined sequence of station visitations. The Sequence data module, on the Advanced Transfer panel, allows us to define an ordered list of Stations that can include assignments of attributes or variables at each station. To direct an entity to follow this pattern of station visitations, we assign the sequence to the entity (using a built-in sequence attribute, described below) and use the "By Sequence" option in the Route module when we transfer the entity to its next destination.

As the entity makes its way along its sequence, Arena will do all the necessary bookkeeping to keep track of where the entity is and where it will go next. This is accomplished through the use of three special, automatically defined Arena attributes:

Entity.Station (or M), Entity.Sequence (or NS), and Entity.JobStep (or IS). Each entity has these three attributes, with the default values for newly created entities being 0 for each. The Entity.Station attribute contains the current station location of the entity or the station to which the entity is currently being transferred. The Entity.Sequence attribute contains the sequence the entity will follow, if any; you need to assign this to each entity that will be transferring via a sequence. The Entity.JobStep attribute specifies the entity's position within the sequence, so it normally starts at 0 and then is incremented by 1 as the entity moves to each new station in its sequence.

We first define and name the list of stations to be visited for each type of entity (by part type, in our example) using the Sequence data module in the Advanced Transfer panel. Then, when we cause a new part to arrive into the system, we associate a specific Sequence with that entity by assigning the name of the sequence to the entity's Entity.Sequence attribute, NS.1 When the entity is ready to transfer to the next station in its sequence, we select the By Sequence option in the Destination Type field of the module we're using to transfer the entity to the next station. At this point during the run, Arena first increments the Entity.JobStep attribute (IS) by 1. Then it retrieves the destination station from the Sequence based on the current values for the Entity.Sequence

1 Arena uses M, NS, and IS internally as names for these attributes, but provides the aliases Entity.Station, Entity.Sequence, and Entity.JobStep in the pull-down list.


and Entity.JobStep attributes. Any optional assignments are made (as defined in the Sequence) and the entity's Station attribute (Entity.Station) is set to the destination station. Finally, Arena transfers the entity to that station.
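The bookkeeping just described can be sketched in Python. Everything here is our own illustration of the mechanism, not Arena's internal implementation; the station list mirrors the Part 1 Process Plan, and the constant process times are placeholders standing in for the TRIA assignments we'll define later:

```python
# Each step is (station, optional attribute assignments made at that step).
sequences = {
    "Part 1 Process Plan": [
        ("Cell 1", {}),                       # Cell 1 time comes from an Expression
        ("Cell 2", {"Process Time": 8.0}),    # placeholder constants, not TRIA draws
        ("Cell 3", {"Process Time": 20.0}),
        ("Cell 4", {"Process Time": 12.0}),
        ("Exit System", {}),
    ],
}

class Entity:
    def __init__(self):
        self.station = None    # Entity.Station (M)
        self.sequence = None   # Entity.Sequence (NS)
        self.jobstep = 0       # Entity.JobStep (IS), defaults to 0
        self.attributes = {}

def route_by_sequence(entity):
    """Advance an entity one step along its sequence, mimicking By Sequence."""
    entity.jobstep += 1                          # 1: increment IS first
    station, assigns = sequences[entity.sequence][entity.jobstep - 1]  # 2: look up
    entity.attributes.update(assigns)            # 3: make any optional assignments
    entity.station = station                     # 4: set Entity.Station
    return station                               # 5: transfer to that station

e = Entity()
e.sequence = "Part 1 Process Plan"
stations_visited = [route_by_sequence(e) for _ in range(5)]
```

Note the increment-then-look-up order in `route_by_sequence`: this is why forgetting the final Exit System step, or resetting the jobstep incorrectly, sends the entity to the wrong place.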

Typically, an entity will follow a sequence through to completion and will then exit the model. However, this is not a requirement. The Entity.JobStep attribute is incremented only when the entity is transferred using the By Sequence option. You can temporarily suspend transfer via the sequence, transfer the entity directly to another station, and then re-enter the sequence later. This might be useful if some of the parts are required to be reworked at some point in the process. Upon completion of the rework, they might re-enter the normal sequence.

You can also reassign the sequence attributes at any time. For example, you might handle a part failure by assigning a new Entity.Sequence attribute and resetting the Entity.JobStep attribute to 0. This new Sequence could transfer the part through a series of stations in a rework area. You can also back up or jump forward in the sequence by decreasing or increasing the Entity.JobStep attribute. However, caution is advised as you must be sure to reset the Entity.JobStep attribute correctly, remembering that Arena will first increment it, then look up the destination station in the sequence.

As indicated earlier, attribute and variable assignments can also be made at each step in a sequence. For example, you could change the entity picture or assign a process time to a user-defined attribute. For our small manufacturing model, we'll use this option to define some of our processing times that are part- and station-specific.

7.1.2 The Modeling Approach

The modeling approach to use for a specific simulation model will often depend on the system's complexity and the nature of the available data. In simple models, it's usually obvious what modules you'll require and the order in which you'll place them. But in more complex models, you'll often need to take a considerable amount of care developing the proper approach. As you learn more about Arena, you'll find that there are often a number of ways to model a system or a portion of a system. You will often hear experienced modelers say that there's not just a single, correct way to model a system. There are, however, plenty of wrong ways if they fail to capture the required system detail correctly.

The design of complex models is often driven by the data requirements of the model and what real data are available. Experienced modelers will often spend a great deal of time determining how they'll enter, store, and use their data, and then let this design determine which modeling constructs are required. As the data requirements become more demanding, this approach is often the only one that will allow the development of an accurate model in a short period of time. This is particularly true of simulation models of supply-chain systems, warehousing, distribution networks, and service networks. For example, a typical warehouse can have hundreds of thousands of uniquely different items, called SKUs (Stock-Keeping Units). Each SKU may require data characterizing its location in the warehouse, its size, and its weight, as well as reorder or restocking data for this SKU. In addition to specifying data for the contents of the warehouse, you also have customer order data and information on the types of storage devices or equipment


that hold the SKUs. If your model requires the ability to change SKU locations, storage devices, restocking options, and so forth, during your experimentation, the data structure you use is critical. Although the models we'll develop in this book are not that complicated, it's always advisable to consider your data requirements before you start your modeling.

For our small manufacturing system, the data structure will, to a limited extent, affect our model design. We will use Sequences to control the flow of parts through the system, and the optional assignment feature in the Sequence data module to enter attributes for part process times for all but Cell 1. We'll use an Expression to define part process times for Cell 1. The part transfer time and the 80% factor for the time required by the new machine in Cell 3 will exploit the Variables concept. Although we don't normally think of Sets as part of our data design, their use can affect the modeling method. In this model, we'll use Sets, combined with a user-defined index, to ensure that we associate the correct sequence and picture with each part type.

First, we'll enter the data modules as discussed earlier. We'll then enter the main model portion, which will require several new modules. Next, we'll animate the model using a CAD or other drawing as a starting point. Finally, we'll discuss briefly the concept of model verification. By now you should be fairly familiar with opening and filling Arena dialog boxes, so we will not dwell on the mundane details of how to enter the information in Arena. In the case of modules and concepts that were introduced in previous chapters, we'll only indicate the data that must be entered. To see the "big picture" of where we're heading, you may want to peek ahead at Figure 7-5, which shows the complete model.

7.1.3 The Data Modules

We start by editing the Sequence data module from the Advanced Transfer panel. Double-click to add a new row and enter the name of the first sequence, Part 1 Process Plan. Having entered the sequence name, you next need to enter the process Steps, which are lists of Arena stations. For example, the Part 1 Process Plan requires you to enter the following Arena stations: Cell 1, Cell 2, Cell 3, Cell 4, and Exit System. We have given the Step Names as Part 1 Step 1 through Part 1 Step 5. The most common error in entering sequences is to forget to enter the last step, which is typically where the entity exits the system. If you forget this step, you'll get an Arena run-time error when the first entity completes its process route and Arena is not told where to send it next. As you define the Sequences, remember that after you've entered a station name once, you can subsequently pick it from station drop-down lists elsewhere in your model. You'll also need to assign attribute values for the part process times for Cells 2, 3, and 4. Recall that we'll define an Expression for the part process times at Cell 1, so we won't need to make an attribute assignment for process times there.

Display 7-1 shows the procedure for Sequence Part 1 Process Plan, Step Part 1 Step 2, and Assignment of Process Time. Using the data in Table 7-1, it should be fairly straightforward to enter the remaining sequence steps. Soon we'll show you how to reference these sequences in the model logic.

Sequence

Name    Part 1 Process Plan

Steps
  Station Name  Cell 2
  Step Name     Part 1 Step 2
  Assignments
    Assignment Type  Attribute
    Attribute Name   Process Time
    Value            TRIA(5, 8, 10)

Display 7-1. The Sequence Data Module

Next, we'll define the expression for the part process times at Cell 1, using the Expression data module in the Advanced Process panel. The expression we want will be called Cell 1 Times, and it will contain the part-process-time distributions for the three parts at Cell 1. We could just as easily have entered these in the previous Sequences module, but we chose to use an expression so you can see several different ways to assign the processing times. We have three different parts that use Cell 1, so we need an arrayed expression with three rows, one for each part type. Display 7-2 shows the data for this module.
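If you ever need to reproduce these triangular samples outside Arena, watch the argument order: Arena's TRIA(min, mode, max) lists the mode second, while Python's `random.triangular(low, high, mode)` takes the mode last. A sketch of the arrayed expression, keyed by part index (our own construction for illustration):

```python
import random

# Rows of the arrayed expression Cell 1 Times, keyed by Part Index
CELL1_TIMES = {
    1: (6, 8, 10),    # TRIA(6, 8, 10)   for Part 1
    2: (11, 13, 15),  # TRIA(11, 13, 15) for Part 2
    3: (7, 10, 13),   # TRIA(7, 10, 13)  for Part 3
}

def cell1_time(part_index):
    """Draw a Cell 1 process time (minutes) for the given part type."""
    lo, mode, hi = CELL1_TIMES[part_index]
    # Python's triangular takes (low, high, mode) -- the mode goes LAST
    return random.triangular(lo, hi, mode)
```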

Next, we'll use the Variable data module from the Basic Process panel to define the machine-speed factor for Cell 3 and the transfer time. Prior to defining our Factor variable, we made the following observation and assumption. The part-process times for Cell 3, entered in the Sequence data module, are for the old machine. We'll assume that

Expression

Name    Cell 1 Times
Rows    3

Expression Values
  Expression Value  TRIA(6, 8, 10)
  Expression Value  TRIA(11, 13, 15)
  Expression Value  TRIA(7, 10, 13)

Display 7-2. The Expression for Cell 1 Part Process Times


Name    Factor
Rows    2

Initial Values
  Initial Value  0.8
  Initial Value  1.0

Name    Transfer Time

Initial Values
  Initial Value  2

Display 7-3. The Factor and Transfer Time Variables

the new machine will be referenced as 1 and the old machine will be referenced as 2. Thus, the first factor value is 0.8 (for the new machine), and the second factor is 1.0 (for the old machine). The transfer-time value (which doesn't need to be an array) is simply entered as 2. This allows us to change this value in a single place at a later time. If we thought we might have wanted to change this to a distribution, we would have entered it as an Expression instead of a Variable. Display 7-3 shows the required entries.

We'll use the Set data module from the Basic Process panel to form sets for our Cell 3 machines, part pictures, and entity types. The first is a Resource set, Cell 3 Machines, containing two members: Cell 3 New and Cell 3 Old. Our Entity Picture set is named Part Pictures and it contains three members: Picture.Part 1, Picture.Part 2, and Picture.Part 3. Finally, our Entity Type set is named Entity Types and contains three members: Part 1, Part 2, and Part 3.

As long as we're creating sets, let's add one more for our part sequences. If you attempt to use the Set module, you'll quickly find that the types available are Resource, Counter, Tally, Entity Type, and Entity Picture. You resolve this problem by using the Advanced Set data module found in the Advanced Process panel. This module lists three types: Queue, Storage, and Other. The Other type is a catch-all option that allows you to form sets of almost any similar Arena objects. We'll use this option to enter our set named Part Sequences, which contains three members: Part 1 Process Plan, Part 2 Process Plan, and Part 3 Process Plan.

Before we start placing logic modules, let's open the Run > Setup dialog box and set our Replication Length to 32 hours and our Base Time Units as Minutes. We also selected Edit > Entity Pictures to open the Entity Picture Placement window, where we created three different pictures: Picture.Part 1, Picture.Part 2, and Picture.Part 3. In our example, we copied blue, red, and green balls; renamed them; and placed a text object with 1, 2, and 3, respectively, to denote the three different part types. Having defined all of the data modules, we're now ready to place and fill out the modules for the main model to define the system's logical characteristics.

7.1.4 The Logic Modules

The main portion of the model's operation will consist of logic modules to represent part arrivals, cells, and part departures. The part arrivals will be modeled using the four modules shown in Figure 7-2. The Create module uses a Random (Expo) distribution with a mean of 13 minutes to generate the arriving parts.


Figure 7-2. The Part Arrival Modules

At this point, we have not yet associated a sequence with each arriving entity. We make this association in the Assign module as shown in Display 7-4. These assignments serve two purposes: they determine which part type has arrived, and they define an index, Part Index, for our sets that will allow us to associate the proper sequence, entity type, and picture with each arrival. We first determine the part index, or part type, with a discrete distribution. The distribution allows us to generate certain values with given probabilities. In our example, these values are the integers 1, 2, and 3 with probabilities 26%, 48%, and 26%, respectively. You enter these values in pairs: cumulative probability and value. The cumulative probability for the last value (in our case, 3) should be 1.0. In general, the values need not be integers, but can take on any values, including negative numbers. The part index values of 1, 2, and 3 allow us not only to refer to the part type, but in this case, they also allow us to index into the previously defined set Part Sequences so that the proper sequence will be associated with the part. To do so, we assign the proper sequence to the automatic Arena Entity.Sequence attribute by using the Part Index attribute as an index into the Part Sequences set.
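The semantics of DISC's cumulative-probability/value pairs can be mimicked in a few lines of Python. This sketches the lookup rule only, not Arena's random-number-stream machinery:

```python
import random

def disc(pairs, u=None):
    """Sample from cumulative (probability, value) pairs, like Arena's DISC."""
    if u is None:
        u = random.random()  # uniform draw on [0, 1)
    for cum_prob, value in pairs:
        if u <= cum_prob:    # first cumulative probability that covers u wins
            return value
    raise ValueError("the last cumulative probability must be 1.0")

# DISC(0.26, 1, 0.74, 2, 1.0, 3) from the Part Index assignment
part_index_pairs = [(0.26, 1), (0.74, 2), (1.0, 3)]
```

Feeding in fixed u values shows the cut points: a draw of 0.10 yields Part 1, 0.50 yields Part 2, and 0.90 yields Part 3.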

We also need to associate the proper entity type and picture for the newly arrived part. We do this in the last two assignments by using the Part Index attribute to index into the relevant sets. Recall that we created the Entity Types set (consisting of Part 1, Part 2, and Part 3). We also defined three pictures (Picture.Part 1, Picture.Part 2, and Picture.Part 3) and grouped them into a set called Part Pictures. You can think of these sets as an arrayed variable (one-dimensional) with

Name    Assign Part Type and Sequence

Assignments
  Type            Attribute
  Attribute Name  Part Index
  New Value       DISC(0.26, 1, 0.74, 2, 1.0, 3)

  Type            Attribute
  Attribute Name  Entity.Sequence
  New Value       Part Sequences(Part Index)

  Type            Attribute
  Attribute Name  Entity.Type
  New Value       Entity Types(Part Index)

  Type            Attribute
  Attribute Name  Entity.Picture
  New Value       Part Pictures(Part Index)

Display 7-4. The Assign Module: Assigning Part Attributes


the index being the Part Index attribute we just assigned. This is true in this case because we have a one-to-one relationship between the part type (Part Index) and the sequence, picture, and entity type. An index of 1 implies Part 1 follows the first sequence, etc. (Be careful in future models because this may not always be true. It is only true in this case because we defined our data structure so we would have this one-to-one relationship.)

In this case, the completed Assign module determines the part type, and then assigns the proper sequence, entity type, and picture. Note that it was essential to define the value of Part Index first in this Assign module, which performs multiple assignments in the order listed, since the value of Part Index determined in the first step is used in the later assignments. We're now ready to send our part on its way to the first station in its sequence.

We'll accomplish this with the Route module from the Advanced Transfer panel.

Before we do this, we need to tell Arena where our entity, or part, currently resides (its current station location). If you've been building your own model along with us (and paying attention), you might have noticed the attribute named Entity.Station on the drop-down list in the Assign module that we just completed. (You can go back and look now if you like.) You might be tempted to add an assignment that would define the current location of our part. Unfortunately, if you use this approach, Arena will return an error when you try to run your model; instead, we use a Station module as described next.

Our completed model will have six stations: Order Release, Cell 1, Cell 2, Cell 3, Cell 4, and Exit System. The last five stations were defined when we filled in the information for our part sequences. The first station, Order Release, will be defined when we send the entity through a Station module (Advanced Transfer panel), which will define the station and tell Arena that the current entity is at that location. Our completed Station module is shown in Display 7-5.

We're finally ready to send our part to the first station in its sequence with the Route module (Advanced Transfer panel). The Route module transfers an entity to a specified station, or the next station in the station visitation sequence defined for the entity. A Route Time to transfer to the next station may be defined. In our model, the previously defined variable Transfer Time is entered for the Route Time; see Display 7-6. We selected the By Sequence option as the Destination Type. This causes the Station Name field to disappear, and when we run the model, Arena will route the arriving entities according to the sequences we defined and attached to entities after they were created.

Name          Order Release Station
Station Type  Station
Station Name  Order Release

Display 7-5. The Station Module


Name              Start Sequence
Route Time        Transfer Time
Units             Minutes
Destination Type  By Sequence

Display 7-6. The Route Module

Figure 7-3. Cell 1 Logic Modules

Now that we have the arriving parts being routed according to their assigned part sequences, we need to develop the logic for our four cells. The logic for all four cells is essentially the same. A part arrives to the cell (at a station), queues for a machine, is processed by the machine, and is sent to its next step in the part sequence. All four of these cells can be modeled easily using the Station - Process - Route module sequence shown in Figure 7-3 (for Cell 1).

The Station module provides the location to which a part can be sent. In our model, we are using sequences for all part transfers, so the part being transferred would get the next location from its sequence. The entries for the Station module for Cell 1 are shown in Display 7-7.

A part arriving at the Cell 1 station is sent to the following Process module (using a direct connect). For the Expression for the delay time, we've entered the previously defined arrayed expression Cell 1 Times using the Part Index attribute to reference the appropriate part-processing time in the arrayed expression. This expression will generate a sample from the triangular distribution with the parameters we defined earlier. The remaining entries are shown in Display 7-8.

Name          Cell 1 Station
Station Type  Station
Station Name  Cell 1

Display 7-7. The Cell 1 Station Module


Name    Cell 1 Process
Action  Seize Delay Release

Resources
  Type           Resource
  Resource Name  Cell 1 Machine
  Quantity       1

Delay Type  Expression
Units       Minutes
Expression  Cell 1 Times(Part Index)

Display 7-8. Cell 1 Process Module

Upon completion of the processing at Cell 1, the entity is directed to the Route module, described in Display 7-9, where it is routed to its next step in the part sequence. Except for the Name, this Route module is identical to the Route module we used to start our sequence at the Order Release station in Display 7-6.

The remaining three cells are very similar to Cell 1, so we'll skip the details. To create their logic, we copied the three modules for Cell 1 three times and then edited the required data. For each of the new Station and Route modules, we simply changed all occurrences of Cell 1 to Cell 2, Cell 3, or Cell 4. We made the same edits to the three additional Process modules and changed the delay Expression for Cells 2 and 4 to Process Time. Recall that in the Sequence module we defined the part-processing times for Cells 2, 3, and 4 by assigning them to the entity attribute Process Time. When the part was routed to one of these cells, Arena automatically assigned this value so it could be used in the module.

At this point, you should be aware that we've somewhat arbitrarily used an Expression for the Cell 1 part-processing times, and attribute assignments in the sequences for the remaining cells. We actually used this approach to illustrate that there are often several different ways to structure data and access them in a simulation model. We could just as easily have incorporated the part-processing times at Cell 1 into the sequences, along with the rest of the times. We also could have used Expressions for these times at Cells 3 and 4. However, it would have been difficult to use an Expression for these times at Cell 2, because Part 2 visits that cell twice and the processing times are different, as shown in Table 7-1. Thus, we would have had to include in our model the ability to know whether the part was on its first or second visit to Cell 2 and define our expression to recognize that. It might be an interesting exercise, but from the modeler's point of view, why not just define these values in the sequences? You should also recognize that this sample model is


Name              Route from Cell 1
Route Time        Transfer Time
Units             Minutes
Destination Type  By Sequence

Display 7-9. Cell 1 Route Module


Name    Cell 3 Process
Action  Seize Delay Release

Resources
  Type            Set
  Set Name        Cell 3 Machines
  Quantity        1
  Save Attribute  Machine Index

Delay Type  Expression
Units       Minutes
Expression  Process Time * Factor(Machine Index)

Display 7-10. Cell 3 Process Module

relatively small in terms of the number of cells and parts. In practice, it would not be uncommon to have 30 to 50 machines with hundreds of different part types. If you undertake a problem of that size, we strongly recommend that you take great care in designing the data structure, since it may make the difference between success and failure of the simulation project.

The Process module for Cell 3 is slightly different because we have two different machines, a new one and an old one, that process parts at different rates. If the machines were identical, we could have used a single resource and entered a capacity of 2. We made note of this earlier and grouped these two machines into a Set called Cell 3 Machines. Now we need to use this set at Cell 3. Display 7-10 shows the data entries required to complete the Process module for this cell.

In the Resource section, we select Set from the drop-down list for our Resource Type. This allows us to select from our specified Cell 3 Machines set for the Set Name field. You can also Seize a Specific Member of a set; that member can be specified as an entity attribute. For our selection of Resource Set, we've accepted the default selection rule, Cyclical, which causes the entity to attempt to select the first available resource beginning with the successor of the last resource selected. In our case, Arena will try to select our two resources alternately; however, if only one resource is currently available, it will be selected. Obviously, these rules are used only if more than one resource is available for selection. The Random rule would cause a random (equiprobable) selection from among those available, and the Preferred Order rule would select the first (lowest-numbered) available resource in the set. Had we selected this option, Arena would have always used the new machine, if available, because it's the first resource in the set. The remaining rules would apply only if one or more of our resources had a capacity greater than 1.
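To make the Cyclical rule concrete, here is a toy Python sketch of "start from the successor of the last resource selected and take the first available member." The class, member names, and availability flags are our own illustration; Arena tracks all of this internally:

```python
class ResourceSet:
    def __init__(self, members):
        self.members = members     # ordered list: index 1 = first member
        self.last_selected = 0     # 0 means "nothing selected yet"

    def select_cyclical(self, available):
        """Return the 1-based index of the chosen member, or None if all busy."""
        n = len(self.members)
        for offset in range(1, n + 1):
            # start the scan at the successor of the last selection, wrapping around
            idx = (self.last_selected + offset - 1) % n + 1
            if available[self.members[idx - 1]]:
                self.last_selected = idx
                return idx
        return None

cell3 = ResourceSet(["Cell 3 New", "Cell 3 Old"])
```

With both machines idle, repeated calls alternate between member 1 and member 2; when only one is idle, that one is chosen regardless of whose "turn" it is, just as the text describes.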

INTERMEDIATE MODELING AND STEADY-STATE STATISTICAL ANALYSIS 305

Figure 7-4. The Exit System Logic Modules

The Save Attribute option allows us to save the index, which is a reference to the selected set member, in a user-specified attribute. In our case, we will save this value in the attribute Machine Index. If the new machine is selected, this attribute will be assigned a value of 1, and if the old machine is selected, the attribute will be assigned a value of 2. This numbering is based on the order in which we entered our resources when we defined the set. The Expression for the delay time uses our attribute Process Time, assigned by the sequence, multiplied by our variable Factor(Machine Index). Recall that our process times are for the old machine and that the new machine can process parts in 80% of that time. Although it's probably obvious by now how this expression works, we'll illustrate it. If the first resource in the set (the new machine) is selected, Machine Index will be assigned a value of 1 and the variable Factor(Machine Index) will use this attribute to take on a value of 0.8. If the second machine (the old machine) is selected, Machine Index is set to 2, the Factor(Machine Index) variable will take on a value of 1.0, and the original process time will be used. Although this method may seem overly complicated for this example, it is used to illustrate the tremendous flexibility that Arena provides. An alternate method, which would not require the variable, would have used the following logical expression:

Process Time * (((Machine Index == 1)*0.8) + (Machine Index == 2))

or

Process Time * (1 - ((Machine Index == 1)*0.2))

We leave it to you to figure out how these expressions work (hint: "a==b" evaluates to 1 if a equals b and to 0 otherwise).
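If you'd rather check than puzzle it out, the equivalence of the three forms is easy to verify numerically. Here is a small Python sketch (our own function names, not Arena syntax) that mirrors the hint: a comparison contributes 1 when true and 0 when false.

```python
# Checking that the three Arena expressions agree. In Arena, (a == b)
# evaluates to 1 if a equals b and 0 otherwise; Python's True/False
# behave the same way in arithmetic. Names below mirror the model:
# factor[1] is the new machine's multiplier, factor[2] the old one's.

factor = {1: 0.8, 2: 1.0}  # the variable Factor(Machine Index)

def delay_with_variable(process_time, machine_index):
    return process_time * factor[machine_index]

def delay_expression_1(process_time, machine_index):
    return process_time * (((machine_index == 1) * 0.8) + (machine_index == 2))

def delay_expression_2(process_time, machine_index):
    return process_time * (1 - ((machine_index == 1) * 0.2))

for m in (1, 2):
    assert delay_with_variable(10.0, m) == delay_expression_1(10.0, m) \
           == delay_expression_2(10.0, m)
```

For a 10-minute process time, all three forms give 8.0 minutes on the new machine (index 1) and 10.0 minutes on the old one (index 2).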

Having completely defined all our data, the part arrival process, and the four cells, we're left only with defining the parts' exit from the system. The two modules that accomplish this are shown in Figure 7-4.

As before, we'll use the Station module to define the location of our Exit System Station. The Dispose module is used to destroy the completed part. Our completed model (although it is not completely animated) appears in Figure 7-5.

Figure 7-5. The Completed Model


At this point, we could run our model, but it would be difficult to tell if our sequences are working properly. So before we start any analysis, we'll first develop our animation to help us determine whether the model is working correctly. Let's also assume that we'll have to present this model to higher-level management, so we want to develop a somewhat fancier animation.

7.1.5 Animation

We could develop our animation in the same manner as we did in Chapter 5: create our own entity pictures, resource symbols, and background based on the picture that was presented in Figure 7-1. However, a picture might already exist that reflects an accurate representation of the system. In fact, it might even exist as a CAD file somewhere in your organization. For instance, the drawing presented in Figure 7-1 was developed using the AutoCAD® package from Autodesk. Arena supports integration of files from CAD programs into its workspaces. Files saved in the DXF format defined by AutoCAD can be imported directly into Arena. Files generated from other CAD or drawing programs (e.g., Microsoft® Visio®) should transfer into Arena as long as they adhere to the AutoCAD standard DXF format.

If the original CAD drawing is 2-D, you only need to save it in the DXF format and then import that file directly into Arena. Most CAD objects (polygons, etc.) will be represented as the same or similar objects in Arena. If the drawing is 3-D, you must first convert the file to 2-D. Colors are lost during this conversion to 2-D, but they may be added again in AutoCAD or in Arena. This conversion also transforms all objects to lines, so the imported drawing can only be manipulated in Arena as individual lines, or the lines may be grouped as objects. We'll assume that you have a DXF file and refer you to online help (the Importing DXF Files topic) for details on creating the DXF file or converting a 3-D drawing. One word of advice: a converted file is imported into Arena as white lines, so if your current background is white, it may appear that the file was not imported. Simply change the Window Background Color (on the Draw toolbar) or select all the lines and change the line color.

For our small manufacturing system, we started with a 3-D drawing and converted it to 2-D. We then saved the 2-D file in the DXF format (Model 07-01.dxf). A DXF file is imported into your current model file by selecting File > DXF Import. Select the file, and when you move your pointer to the model window, it will appear as cross hairs. Using the pointer, draw a box to contain the entire drawing to be imported. If the imported drawing is not the correct size, you can select the entire drawing and resize it as required. You can now use this drawing as the start of your animation.

For our animation, you should delete all the lettering and arrows. Then we'll move the Cell 1 queue animation object from the Process module (Cell 1 Process) to its approximate position on the drawing.

Now position your pointer near the upper-left corner of a machine and drag your pointer so the resulting bounding outline encloses the entire drawing. Use the Copy button to copy the drawing of the machine to the clipboard. Now we'll open the Resource Picture Placement window by clicking the Resource button on the Animate toolbar. Double-click the idle resource symbol and replace the current picture with a copy of the contents of the clipboard. Delete the base of the machine and draw two boxes on top of the representation for the top part of the machine. This is necessary because our drawing consists of all lines, and we want to add a fill color to the machine. Now delete all the lines from the original copy and fill the boxes with your favorite color. Copy this new symbol to your library and then copy that symbol to the Busy picture. Save your library, exit the resource window, and finally place the resource. You now have a resource picture that will not change, but we could go back later and add more animation effects.

For the next step, draw a box the same size as the base of the machine and then delete the entire drawing of the machine. Fill this new box with a different color and then place the resource symbol on top of this box (you may have to resize the resource symbol to be the correct size). Now move the seize point so that it is positioned at about the center of our new machine. Now make successive copies of our new resource and the machine base, and then change the names. If you follow this procedure, note that you will need to flip the resources for Cells 3 and 4.

To complete this phase of the animation, you need to move and resize the remaining queues. Once this is complete, you can run your animation and see your handiwork. You won't see parts moving about the system (there are no routes yet), but you will see the parts in the queues and at the machines. If you look closely at one of the machines while the animation is running, you'll notice that the parts sit right on top of the entire machine. This display is not what we want. Ideally, the part should sit on top of the base, but under the top part of our machine (the resource). Representing this is not a problem. Select a resource (you can do this in edit mode or by temporarily pausing the run) and use the Bring to Front feature with Arrange > Bring to Front or the Bring to Front button on the Arrange toolbar. Now when you run the animation, you'll get the desired effect. We're now ready to animate our part movement.

In our previous animations, we basically had only one path through the system, so adding the routes was fairly straightforward. In our small manufacturing system, there are multiple paths through the system, so you must be careful to add routes for each travel possibility. For example, a part leaving Cell 2 can go to Cell 1, 3, or 4. The stations need to be positioned first; add station animation objects using the Station button on the Animate Transfer toolbar. Next, place the routes. If you somehow neglect to place a route, the simulation will still (correctly) send the entity to the correct station with a transfer time of 2; however, that entity movement will not appear in your animation. Also be aware that routes can be used in both directions. For example, let's assume that you added the route from Cell 1 to Cell 2, but missed the route from 2 to 1. When a Part 2 completes its processing at Cell 2, Arena looks to route that part to Cell 1, routing from 2 to 1. If that route is missing, Arena will look for the route from 1 to 2 and use that. Thus, you would see the part moving from the entrance of Cell 2 to the exit of Cell 1 in a counterclockwise direction (an animation mistake for this model).

If you run and watch your newly animated model, you should notice that occasionally parts will run over or pass each other. This is due to a combination of the data supplied and the manner in which we animated the model. Remember that all part transfers were assumed to be two minutes, regardless of the distance to be traveled. Arena sets the speed of a routed entity based on the transfer time and the physical length of the route on the animation. In our model, some of these routes are very short (from Cell 1 to Cell 2) and some are very long (from Cell 2 to Cell 1). Thus, the entities being transferred will be moving at quite different speeds relative to each other. If this were important, we could request or collect better transfer times and incorporate these into our model. The easiest way would be to delete the variable Transfer Time and assign these new transfer times to a new attribute in the Sequences module. If your transfer times and drawing are accurate, the entities should now all move at the same speed. The only potential problem is that a part may enter the main aisle at the same time another part is passing by, resulting in one of the parts overlapping the other until their paths diverge. This is a more difficult problem to resolve, and it may not be worth the bother. If the only concern is for purposes of presentation, watch the animation and find a long period of time when this does not occur, and show only this period of time during your presentation. The alternative is to use material-handling constructs, which we'll do in Chapter 8.
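The arithmetic behind the different animated speeds is simple: each entity moves at the route's drawn length divided by the transfer time. The Python sketch below uses made-up route lengths (we haven't measured the actual drawing) just to show the effect:

```python
# Why routed parts appear to move at different speeds: Arena draws each
# routed entity moving at (animated route length) / (transfer time).
# The route lengths below are invented illustrative numbers, not taken
# from the Model 7-1 drawing.

transfer_time = 2.0                  # minutes, the same for every transfer

route_lengths = {                    # hypothetical animated lengths
    ("Cell 1", "Cell 2"): 10.0,     # a short hop
    ("Cell 2", "Cell 1"): 80.0,     # the long way around the loop
}

speeds = {route: length / transfer_time
          for route, length in route_lengths.items()}

# Both transfers take the same 2 simulated minutes, but the part going
# from Cell 2 back to Cell 1 is drawn moving 8 times as fast as one
# going from Cell 1 to Cell 2.
```

This is why supplying distance-proportional transfer times (or the material-handling constructs of Chapter 8) makes the animation look right.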

After adding a little annotation, the final animation should look something like Figure 7-6 (this is a view of the system at time 541.28). At this point in your animation, you might want to check to see if your model is running correctly or at least the way you intended it. We'll take up this topic in our next section.

7.1.6 Verification

Verification is the process of ensuring that the Arena model behaves in the way it was intended according to the modeling assumptions made. This is actually very easy compared to model validation, which is the process of ensuring that the model behaves the same as the real system. We'll discuss both of these aspects in more detail in Chapter 13. Here we'll only briefly introduce the topic of model verification.

Figure 7-6. The Animated Small Manufacturing System: Model 7-1


Verification deals with obvious problems as well as the not-so-obvious ones. For example, if you had tried to run your model and Arena returned an error message indicating that you had neglected to define one of the machine resources at Cell 3, the model is obviously not working the way you intended. Actually, we would normally call this the process of debugging! However, since we assume that you'll not make these kinds of errors (or, if you do, you can figure out yourself how to fix them), we'll deal with the not-so-obvious problems.

Verification is fairly easy when you're developing small classroom-size problems like the ones in this book. When you start developing more realistically sized models, you'll find that it's a much more difficult process, and you may never be 100% sure on very, very large models.

One easy verification method is to allow only a single entity to enter the system and follow that entity to be sure that the model logic and data are correct. You could use the Step button found on the Standard toolbar to control the model execution and step the entity through the system. For this model, you could set the Max Arrivals field in the Create module to 1. To control the entity type, you could replace the discrete distribution in the Assign module that determines the part type with the specific part type you want to observe. This would allow you to check each of the part sequences. Another common method is to replace some or all model data with constants. Using deterministic data allows you to predict system behavior exactly.

If you're going to use your model to make real decisions, you should also check to see how the model behaves under extreme conditions. For example, introduce only one part type or increase/decrease the part interarrival or service times. If your model is going to have problems, they will most likely show up during these kinds of stressed-out runs. Also, try to make effective use of your animation-it can often reveal problems when viewed by a person familiar with the operation of the system being modeled.

It's often a good idea, and a good test, to make long runs for different data and observe the summary results for potential problems. One skill that can be of great use during this process is that of performance estimation. A long, long time ago, before the invention of calculators and certainly before personal computers, engineers carried around sticks called slide rules (often protected by leather cases and proudly attached to engineers' belts). These were used to calculate the answers to the complicated problems given to them by their professors or bosses. These devices achieved this magic feat by adding or subtracting logs (not tree logs, but logarithms). Although they worked rather well (but not as well, as easily, or as fast as a calculator), they only returned (actually, you read it off the stick) a sequence of digits, say 3642. It was up to the engineer to figure out where to put the decimal point. Therefore, engineers had to become very good at developing rough estimates of the answers to problems before they used the sticks. For example, if the engineer estimated the answer to be about 400, then the answer was 364.2. However, if the estimate was about 900, there was a problem. At that point, it was necessary to determine if the problem was in the estimation process or the slide-rule process. We suspect that at this point you're asking two questions: 1) why the long irrelevant story, and 2) did we really ever use such sticks? Well, the answers are: 1) to illustrate a point, and 2) all of the authors did!


So how do you use this great performance-estimation skill? You define a set of conditions for the simulation, estimate what will result, make a run, and look at the summary data to see if you were correct. If you were, feel good and try it for a different set of conditions. If you weren't correct (or at least in the ballpark), find out why not. It may be due to bad estimation, a lack of understanding of the system, or a faulty model. Sometimes not-so-obvious (but real) results are created by surprising interactions in the model. In general, you should thoroughly exercise your simulation models and be comfortable with the results before you use them to make decisions. And it's best to do this early in your project-and often.

Back in Section 2.8 when we mentioned verification, we suggested that you verify your code. Your response could have been, "What code?" Well, there is code, and we'll show you a little in Figures 7-7 and 7-8. But you still might be asking, "Why code?" To explain the reason for this code requires a little bit of background (yes, another history lesson). The formation of Systems Modeling (the company that originally developed Arena) and the initial release of the simulation language SIMAN® (on which Arena is based and to which you can gain access through Arena) occurred in 1982. Personal computers were just starting to hit the market, and SIMAN was designed to run on these new types of machines. In fact, SIMAN required only 64 Kbytes of memory, which was a lot in those days. There was no animation capability (Cinema, the accompanying animation tool, was released in 1985), and you created models using a text editor, just like using a programming language. A complete model required the development of two files, a model file and an experiment file. The model file, often referred to as the .mod file, contained the model logic. The experiment file, referred to as the .exp file, defined the experimental conditions. It required the user to list all stations, attributes, resources, etc., that were used in the model. The creation of these files required that the user follow a rather exacting syntax for each type of model or experiment construct. In other words, you had to start certain statements only in column 10; you had to follow certain entries with the required comma, semicolon, or colon; all resources and attributes were referenced only by number; and only a limited set of key words could be used. (Many of your professors, or maybe their professors, learned simulation this way.)

Since 1982, SIMAN has been enhanced continually and still provides the basis for an Arena simulation model. When you run an Arena simulation, Arena examines each option you selected in your modules and the data that you supplied and then creates SIMAN .mod and .exp files. These are the files that are used to run your simulations. The implication is that all the modules found in the Basic Process, Advanced Process, and Advanced Transfer panels are based on the constructs found in the SIMAN simulation language. The idea is to provide a simulation tool (Arena) that is easy to use, yet one that is still based on the powerful and flexible SIMAN language. So you see, it is still possible, and sometimes even desirable, to look at the SIMAN code. In fact, you can even write out and edit these files. However, it is only possible to go down (from Arena to SIMAN code) and not possible to go back up (from SIMAN code to Arena). As you become more proficient with Arena, you might occasionally want to look at this code to be assured that the model is doing exactly what you want it to do: verification.


;
;     Model statements for module:  Process 3
;
10$    ASSIGN:    Cell 3 Process.NumberIn=Cell 3 Process.NumberIn + 1:
                  Cell 3 Process.WIP=Cell 3 Process.WIP+1;
138$   QUEUE,     Cell 3 Process.Queue;
137$   SEIZE,     2,VA:
                  SELECT(Cell 3 Machines,CYC,Machine Index),1:NEXT(136$);
136$   DELAY:     Process Time * Factor(Machine Index),,VA;
135$   RELEASE:   Cell 3 Machines(Machine Index),1;
183$   ASSIGN:    Cell 3 Process.NumberOut=Cell 3 Process.NumberOut + 1:
                  Cell 3 Process.WIP=Cell 3 Process.WIP-1:NEXT(11$);

Figure 7-7. SIMAN Model File for the Process Module

We should point out that you can still create your models in Arena using the base SIMAN language. If you build a model using only the modules found in the Blocks and Elements panels, you are basically using the SIMAN language. Recall that we used several modules from the Blocks panel when we constructed our inventory model in Section 5.7.

You can view the SIMAN code for our small manufacturing model by using the Run > SIMAN > View option. Selecting this option will generate both files, each in a separate window. Figure 7-7 shows a small portion of the .mod file, the code written out for the Process module used at Cell 3. The SIMAN language is rather descriptive, so it is possible to follow the general logic. An entity that arrives at this module increments some internal counters, enters a queue, Cell 3 Process.Queue, waits to seize a resource from the set Cell 3 Machines, delays for the process time (adjusted by our factor), releases the resource, decrements the internal counters, and exits the module.

Figure 7-8 shows a portion of the .exp file that defines our three attributes and the queues and resources used in our model.

We won't go into a detailed explanation of the statements in these files; the purpose of this exercise is merely to make you aware of their existence. For a more comprehensive explanation, refer to Pegden, Shannon, and Sadowski (1995).

ATTRIBUTES:  Machine Index:
             Part Index:
             Process Time;

QUEUES:      Cell 1 Process.Queue,FIFO,,AUTOSTATS(Yes,,):
             Cell 2 Process.Queue,FIFO,,AUTOSTATS(Yes,,):
             Cell 3 Process.Queue,FIFO,,AUTOSTATS(Yes,,):
             Cell 4 Process.Queue,FIFO,,AUTOSTATS(Yes,,);

RESOURCES:   Cell 2 Machine,Capacity(1),,,COST(0.0,0.0,0.0),
             CATEGORY(Resources),,AUTOSTATS(Yes,,):
             Cell 1 Machine,Capacity(1),,,COST(0.0,0.0,0.0),
             CATEGORY(Resources),,AUTOSTATS(Yes,,):
             Cell 3 Old,Capacity(1),,,COST(0.0,0.0,0.0),
             CATEGORY(Resources),,AUTOSTATS(Yes,,):
             Cell 3 New,Capacity(1),,,COST(0.0,0.0,0.0),
             CATEGORY(Resources),,AUTOSTATS(Yes,,):
             Cell 4 Machine,Capacity(1),,,COST(0.0,0.0,0.0),
             CATEGORY(Resources),,AUTOSTATS(Yes,,);

Figure 7-8. SIMAN Experiment File for the Process Module


If you're familiar with SIMAN or you would just like to learn more about how this process works, we might suggest that you place a few modules, enter some data, and write out the .mod and .exp files. Then edit the modules by selecting different options and write out the new files. Look for the differences between the two .mod files. Doing so should give you a fair amount of insight as to how this process works.

7.2 Statistical Analysis of Output from Steady-State Simulations

In Chapter 6, we described the difference between terminating and steady-state simulations and indicated how you can use Arena's reports, PAN, and the Output Analyzer to do statistical analyses in the terminating case. In this section, we'll show you how to do statistical inference on steady-state simulations.

Before proceeding, we should encourage you to be sure that a steady-state simulation is appropriate for what you want to do. Often people simply assume that a long-run, steady-state simulation is the thing to do, which might in fact be true. But if the starting and stopping conditions are part of the essence of your model, a terminating analysis is probably more appropriate; if so, you should just proceed as in Chapter 6. The reason for avoiding steady-state simulation is that, as you're about to see, it's a lot harder to carry out anything approaching a valid statistical analysis than in the terminating case if you want anything beyond Arena's standard 95% confidence intervals on mean performance measures; so if you don't need to get into this, you shouldn't.

One more caution before we wade into this material: As you can imagine, the run lengths for steady-state simulations need to be pretty long. Because of this, there are more opportunities for Arena to sequence its internal operations a little differently, causing the random-number stream (see Chapter 12) to be used differently. This doesn't make your model in any sense "wrong" or inaccurate, but it can affect the numerical results, especially for models that have a lot of statistical variability inherent in them. So if you follow along on your computer and run our models, there's a chance that you're going to get numerical answers that differ from what we report here. Don't panic over this since it is, in a sense, to be expected. If anything, this just amplifies the need for some kind of statistical analysis of simulation output data since variability can come not only from "nature" in the model's properties, but also from internal computational issues.

In Section 7.2.1, we'll discuss determination of model warm-up and run length. Section 7.2.2 describes the truncated-replication strategy for analysis, which is by far the simplest approach (and, in some ways, the best). A different approach called batching is described in Section 7.2.3. A brief summary is given in Section 7.2.4, and Section 7.2.5 mentions some other issues in steady-state analysis.

7.2.1 Warm-Up and Run Length

As you might have noticed, our examples have been characterized by a model that's initially in an empty-and-idle state. This means that the model starts out empty of entities and all resources are idle. In a terminating system, this might be the way things actually start out, so everything is fine. But in a steady-state simulation, initial conditions aren't supposed to matter, and the run is supposed to go on forever.


Actually, though, even in a steady-state simulation, you have to initialize and stop the run somehow. And since you're doing a simulation in the first place, it's a pretty safe bet that you don't know much about the "typical" system state in steady state or how "long" a run is long enough. So you're probably going to wind up initializing in a state that's pretty weird from the viewpoint of steady state and just trying some (long) run lengths. If you're initializing empty and idle in a simulation where things eventually become congested, your output data for some period of time after initialization will tend to understate the eventual congestion; that is, your data will be biased low.

To remedy this, you might try to initialize in a state that's "better" than empty and idle. This would mean placing, at time 0, some number of entities around the model and starting things out that way. While it's possible to do this in your model, it's pretty inconvenient. More problematic is that you'd generally have no idea how many entities to place around at time 0; this is, after all, one of the questions the simulation is supposed to answer.

Another way of dealing with initialization bias is just to run the model for so long that whatever bias may have been there at the beginning is overwhelmed by the amount of later data. This can work in some models if the biasing effects of the initial conditions wear off quickly.

However, what people usually do is initialize empty and idle, realizing that this is unrepresentative of steady state, but then let the model warm up for a while until it appears that the effects of the artificial initial conditions have worn off. At that time, you can clear the statistical accumulators (but not the system state) and start afresh, gathering statistics for analysis from then on. In effect, this is initializing in a state other than empty and idle, but you let the model decide how many entities to have around when you start (afresh) to watch your performance measures. The run length should still be long, but maybe not as long as you'd need to overwhelm the initial bias by sheer arithmetic.

It's very easy to specify an initial Warm-up Period in Arena. Just open the Run > Setup > Replication Parameters dialog box and fill in a value (be sure to verify the Time Units). Every replication of your model still runs starting as it did before, but at the end of the Warm-up Period, all statistical accumulators are cleared, and your reports (and any Outputs-type saved data from the Statistic module of results across the replications) reflect only what happened after the warm-up period ends. In this way, you can "decontaminate" your data from the biasing initial conditions.
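Conceptually, clearing the accumulators at the end of the Warm-up Period amounts to computing time averages only over the post-warm-up portion of the run. The Python sketch below (our own helper function, applied to an invented WIP history) shows the effect on a time-persistent statistic:

```python
# A sketch of what "clearing the accumulators" at the end of the warm-up
# accomplishes for a time-persistent statistic like Total WIP. The
# (time, value) history below is invented for illustration only.

def time_average(history, t_start, t_end):
    """Time-weighted average of a piecewise-constant curve over
    [t_start, t_end]. history: list of (time, value) pairs; each value
    holds from its time until the next recorded time."""
    area = 0.0
    for i, (t, v) in enumerate(history):
        t_next = history[i + 1][0] if i + 1 < len(history) else t_end
        lo, hi = max(t, t_start), min(t_next, t_end)
        if hi > lo:
            area += v * (hi - lo)
    return area / (t_end - t_start)

wip = [(0, 0), (100, 2), (500, 5), (2880, 6), (5000, 5)]  # made-up data

biased = time_average(wip, 0, 7200)     # includes empty-and-idle start
warmed = time_average(wip, 2880, 7200)  # accumulators cleared at 2,880 min
# warmed > biased here: the low start pulled the untruncated average down
```

With these invented numbers, the untruncated average is biased low by the empty-and-idle start, exactly the effect the Warm-up Period is meant to remove.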

The hard part is knowing how long the warm-up period should be. Probably the most practical idea is just to make some plots of key outputs from within a run, and eyeball when they appear to stabilize. To illustrate this, we took Model 7-1 from Section 7.1 and made the following modifications, calling the result Model 7-2:

• To establish a single overall output performance measure, we made an entry in the Statistic module to compute the total work in process (WIP) of all three parts combined. The Name and Report Label are both Total WIP, and the Type is Time-Persistent. To enter the Expression we want to track over time, we right-clicked in that field and selected Build Expression, clicked down the tree via Basic Process Variables > Entity > Number in Process, selected Part 1 as the Entity Type, and got EntitiesWIP(Part 1) for the Current Expression, which is part of what we want. Typing a + after that and selecting Part 2 for the Entity Type, another +, then Part 3 for the Entity Type finally gives us EntitiesWIP(Part 1) + EntitiesWIP(Part 2) + EntitiesWIP(Part 3), which is the total WIP. This will create an entry labeled Total WIP in the Category Overview (under User Specified) giving the time-average and maximum of the total number of parts in process.

Figure 7-9. The Completed Total WIP Entry in the Statistic Module

• However, we also want to track the "history" of the Total WIP curve during the simulation rather than just getting the post-run summary statistics, since we want to "eyeball" when this curve appears to have stabilized in order to specify a reasonable Warm-up Period. You could place an animated Plot in your model, as we've done before; however, this will disappear as soon as you hit the End button, and it will also be subject to perhaps-large variation, clouding your judgment about the stabilization point. We need to save the curve history and make more permanent plots, and we also need to plot the curve for several replications to mitigate the noise problem. To do so, we made a file-name entry, Total WIP History.dat, in the Output File field of the Total WIP entry in the Statistic module, which will save the required information into that file, which can be read into the Arena Output Analyzer (see Section 6.4) and plotted after the run is complete. Depending on how long your run is and how many replications you want, this file can get pretty big since you're asking it to hold a lot of detailed, within-run information (our run, described below, resulted in a file of about 176 KB). The complete entry in the Statistic module is in Figure 7-9.

• Since we aren't interested in animation at this point, we accessed the Run > Run Control option and checked Batch Run (No Animation) to speed things up. We also cleared all the boxes under Statistics Collection in Run > Setup > Project Parameters to increase speed further. To get a feel for the variation, we specified 10 for the Number of Replications in Run > Setup > Replication Parameters; since we're now interested in long-run steady-state performance, we increased the Replication Length from 32 Hours (1.33 Days) to 5 Days. We freely admit that these are pretty much arbitrary settings, and we settled on them after some trial and error. For the record, it took less than two seconds to run all this on a 2.13 GHz notebook computer.

To make a plot of WIP vs. time in the Output Analyzer (see Section 6.4 for basics on the Output Analyzer), we created a new data group and added the file Total WIP History.dat to it. We then selected Plot (Graph > Plot), Added the .dat file (selecting All in the Replications field of the Data File dialog box), typed in a Title, and changed the Axis Labels; the resulting dialog boxes are shown in Figure 7-10.


Figure 7-10. The Output Analyzer's Plot Dialog Box

Figure 7-11 shows the resulting plot of WIP across the simulations, with the curves for all ten replications superimposed. From this plot, it seems clear that as far as the WIP output is concerned, the run length of five days (7,200 minutes, in the Base Time Units used by the plot) is enough for the model to have settled out, and it also seems fairly clear that the model isn't "exploding" with entities, as would be the case if processing couldn't keep up with arrivals. As for a Warm-up Period, the effect of the empty-and-idle initial conditions for the first several hundred minutes on WIP is evident at the left end of each curve and is reasonably consistent across replications, but it looks like things settle down after about 2,000 minutes; rounding up a little to be conservative, we'll select 2 Days (2,880 minutes) as our Warm-up Period.

Figure 7-11. Within-Run WIP Plots for Model 7-2

If we'd made the runs (and plot) for a longer time period, or if the model warm-up occurred more quickly, it could be difficult to see the appropriate Warm-up Period at the left end of the curves. If so, you could "zoom in" on just the first part of each replication via the "Display Time from ... to ..." area in the Plot dialog box.

In models where you're interested in multiple output performance measures, you'd need to make a plot like Figure 7-11 for each output. The outputs could disagree about how long the warm-up takes, in which case the safe move is to take the maximum of the individual warm-up periods as the one to use overall.
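The eyeball procedure just described can be mimicked numerically. The sketch below (plain Python, our own illustration rather than anything Arena does) averages the superimposed replication curves in the spirit of Welch's procedure, smooths the average with a moving average, and reports the first index after which the smoothed curve stays within a tolerance of its apparent steady-state level. The window and tolerance values are arbitrary assumptions you'd tune against plots like Figure 7-11.

```python
def estimate_warmup(curves, window=200, tol=0.05):
    """Welch-style warm-up estimate from superimposed replications.

    curves: list of equal-length sequences, one per replication, of
    equally spaced within-run observations (e.g., WIP sampled per minute).
    Returns the first index after which the smoothed cross-replication
    average stays within a fraction tol of its apparent final level.
    """
    n = len(curves[0])
    # average the replications at each time point
    mean_curve = [sum(rep[i] for rep in curves) / len(curves) for i in range(n)]
    # moving average of width `window` (running-sum update for speed)
    smooth = []
    s = sum(mean_curve[:window])
    smooth.append(s / window)
    for i in range(window, n):
        s += mean_curve[i] - mean_curve[i - window]
        smooth.append(s / window)
    # take the "steady-state" level from the last quarter of the run
    k = len(smooth) // 4
    final = sum(smooth[-k:]) / k
    cutoff = tol * abs(final)
    last = -1  # index of the last excursion outside the tolerance band
    for i, v in enumerate(smooth):
        if abs(v - final) > cutoff:
            last = i
    return last + 1
```

In practice you'd still round the reported index up generously, as the text does in choosing 2 Days rather than 2,000 minutes.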

7.2.2 Truncated Replications

If you can identify a warm-up period, and if this period is reasonably short relative to the run lengths you plan to make, then things are fairly simple for steady-state statistical analysis: just make IID (that's independent and identically distributed, in case you've forgotten) replications, as was done for terminating simulations in Chapter 6, except that you also specify a Warm-up Period for each replication in Run > Setup > Replication Parameters. With these appropriate warm-up and run-length values specified, you just proceed to make independent replications and carry out your statistical analyses as in Chapter 6 with warmed-up independent replications, so that you're computing steady-state rather than terminating quantities. Life is good.

This idea applies to comparing or selecting alternatives (Sections 6.4 and 6.5) and optimum seeking (Section 6.6), as well as to single-system analysis. However, there could be different warm-up and run-length periods needed for different alternatives; you don't really have the opportunity to control this across the different alternatives that OptQuest might decide to consider, so you should probably run your model ahead of time for a range of possible scenarios and specify the warm-up to be (at least) the maximum of those you experienced.

For the five-day replications of Model 7-2, we entered in Run > Setup > Replication Parameters the 2-Day Warm-up Period we determined in Section 7.2.1 and again asked for ten replications. Since we no longer need to save the within-run history, we deleted the Output File entry in the Statistic module. We called the resulting model Model 7-3 (note our practice of saving successive changes to models under different names so we don't cover up our tracks in case we need to backtrack at some point). The result was a 95% confidence interval for expected average WIP of 16.39 ± 6.51; from Model 7-2 with no warm-up, we got 15.35 ± 4.42, illustrating the downward-biasing effect of the initial low-congestion period. To see this difference more clearly, since these confidence intervals are pretty wide, we made 100 replications (rather than ten) and got 15.45 ± 1.21 for Model 7-3 and 14.42 ± 0.88 for Model 7-2. Note that the confidence intervals, based on the same number of replications, are wider for Model 7-3 than for Model 7-2; this is because each replication of Model 7-3 uses data from only the last three days, whereas each replication of Model 7-2 uses the full five days, giving it lower variability (at the price of harmful initialization bias, which is why we feel the tradeoff favors truncating as in Model 7-3).
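For reference, the arithmetic behind a cross-replication interval like 16.39 ± 6.51 is just the Chapter 6 calculation applied to the warmed-up replication averages. A minimal Python sketch follows; the replication means fed to it would come from your own reports (the t critical values are standard table entries, not Arena output, and we fall back to a normal approximation for large numbers of replications).

```python
import math
from statistics import mean, stdev, NormalDist

# two-sided 95% t critical values for small degrees of freedom
T_975 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
         6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def ci95(rep_means):
    """Cross-replication 95% confidence interval (point estimate,
    half width) from independent truncated-replication averages,
    exactly as in Chapter 6."""
    n = len(rep_means)
    t = T_975.get(n - 1, NormalDist().inv_cdf(0.975))  # normal approx for large n
    hw = t * stdev(rep_means) / math.sqrt(n)
    return mean(rep_means), hw
```

Widening the interval here requires either more replications (more degrees of freedom) or less variable replication averages, which is the tradeoff discussed next.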

If more precision is desired in the form of narrower confidence intervals, you could achieve it by simulating "some more." Now, however, you have a choice as to whether to make more replications with this run length and warm-up or keep the same number of replications and extend the run length. (Presumably the original warm-up would still be adequate.) It's probably simplest to stick with the same run length and just make more replications, which is the same thing as increasing the statistical sample size. There is something to be said, though, for extending the run lengths and keeping the same number of replications; the increased precision with this strategy comes not from increasing the "sample size" (statistically, degrees of freedom) but rather from decreasing the variability of each within-run average, since it's being taken over a longer run. Furthermore, by making the runs longer, you're even more sure that you're running long enough to "get to" steady state.

If you can identify an appropriate run length and warm-up period for your model, and if the warm-up period is not too long, then the truncated-replication strategy is quite appealing. It's fairly simple, relative to other steady-state analysis strategies, and gives you truly independent observations (the results from each truncated replication), which is a big advantage in doing statistical analysis. This goes not only for simply making confidence intervals as we've done here, but also for comparing alternatives as well as other statistical goals.

7.2.3 Batching in a Single Run

Some models take a long time to warm up to steady state, and since each replication would have to pass through this long warm-up phase, the truncated-replication strategy of Section 7.2.2 can become inefficient. In this case, it might be better to make just one really long run and thus have to "pay" the warm-up only once. We modified Model 7-3 to make a single replication of length 50 days including a (single) warm-up of two days (we call this Model 7-4); this is the same simulation "effort" as making the ten replications of length five days each, and it took about the same amount of computer time. Since we want to plot the within-run WIP data, we reinstated a file name entry in the Output File field in the Statistic data module, calling it Total WIP History One Long Run.dat (since it's a long run, we thought it deserved a long file name too). Figure 7-12 plots Total WIP across this run. (For now, ignore the thick vertical bars we drew in the plot.)

Figure 7-12. Total WIP Over a Single Run of 50 Days

The difficulty now is that we have only one "replication" on each performance measure from which to do our analysis, and it's not clear how we're going to compute a variance estimate, which is the basic quantity needed to do statistical inference. Viewing each individual observation or time-persistent value within a run as a separate "data point" would allow us to do the arithmetic to compute a within-run sample variance, but doing so is extremely dangerous since the possibly heavy correlation (see Section C.2.4 in Appendix C) of points within a run will cause this estimate to be severely biased with respect to the true variance of an observation. Somehow, we need to take this correlation into account or else "manufacture" some kind of "independent observations" from this single long run in order to get a decent estimate of the variance.

There are several different ways to proceed with statistical analysis from a single long run. One relatively simple idea (that also appears to work about as well as other more complicated methods) is to try to manufacture almost-uncorrelated "observations" by breaking the run into a few large batches of many individual points, compute the averages of the points within each batch, and treat them as the basic IID observations on which to do statistical analysis (starting with variance estimation). These batch means then take the place of the means from within the multiple replications in the truncated-replication approach; we've replaced the replications by batches. In the WIP plot of Figure 7-12, the thick vertical dividing lines illustrate the idea; we'd take the time-average WIP level between the lines as the basic "data" for statistical analysis. In order to obtain an unbiased variance estimate, we need the batch means to be nearly uncorrelated, which is why we need to make the batches big; there will still be some heavy correlation of individual points near either side of a batch boundary, but these points will be a small minority within their own large batches, which we hope will render the batch means nearly uncorrelated. In any case, Arena will try to make the batches big enough to look uncorrelated, and will let you know if your run was too short for it to manufacture batch means that "look" (in the sense of a statistical test) nearly uncorrelated, in which case, you'd have to make your run longer.
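The mechanics of forming batch means from a saved within-run history are simple. The sketch below (plain Python, with an arbitrary choice of 20 batches) performs the kind of computation just described; the resulting batch means are then treated exactly like the replication means of Section 6.3.

```python
from statistics import mean

def batch_means(data, n_batches=20):
    """Break a post-warm-up output series into contiguous batches of
    equal size and return the batch means. Leftover observations in a
    final partial batch are discarded, as described in the text."""
    size = len(data) // n_batches
    return [mean(data[i * size:(i + 1) * size]) for i in range(n_batches)]
```

For a time-persistent statistic like WIP you'd batch over equal spans of simulated time rather than equal counts of observations, but the idea is the same.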

As we just hinted, Arena automatically attempts to compute 95% confidence intervals via batch means for the means of all output statistics and gives you the results in the Half Width column next to the Average column in the report for each replication. If you're making just one replication, as we've been discussing, this is your batch-means confidence interval. On the other hand, if you've made several replications, you see this in the Category by Replication report for each replication; the Half Width in the Category Overview report is across replications, as discussed in Sections 6.3 and 7.2.2. We say "attempts to compute" since internal checks are done to see if your replication is long enough to produce enough data for a valid batch-means confidence interval on an output statistic; if not, you get only a message to this effect, without a half-width value (on the theory that a wrong answer is worse than no answer at all) for this statistic. If you've specified a Warm-up Period, data collected during this period are not used in the confidence-interval calculation. To understand how this procedure works, think of Arena as batching "on the fly" (that is, observing the output data during your run and throwing them into batches as your simulation goes along).

So what does "enough data" mean? There are two levels of checks to be passed, the first of which is just to get started. For a Tally statistic, Arena demands a minimum of 320 observations. For a time-persistent statistic, you must have had at least five units of simulated time during which there were at least 320 changes in the level of the discrete-change variable. Admittedly, these are somewhat arbitrary values and conditions, but we need to get started somehow, and the more definitive statistical test, which must also be passed, is done at the end of this procedure. If your run is not long enough to meet this first check, Arena reports "(Insufficient)" in the Half-Width column for this variable. Just making your replication longer should eventually produce enough data to meet these getting-started requirements.

If you have enough data to get started, Arena then begins batching by forming 20 batch means for each Tally and time-persistent statistic. For Tally statistics, each batch mean will be the average of 16 consecutive observations; for time-persistent statistics, each will be the time average over 0.25 base time unit. As your run progresses, you will eventually accumulate enough data for another batch of these same "sizes," which is then formed as batch number 21. If you continue your simulation, you'll get batches 22, 23, and so on, until you reach 40 batches. At this point, Arena will re-batch these 40 batches by averaging batches one and two into a new (and bigger) batch one, batches three and four into a new (bigger) batch two, etc., so that you'll then be back to 20 batches, but each will be twice as "big." As your simulation proceeds, Arena will continue to form new batches (21, 22, and so on), each of this larger size, until again 40 batches are formed, when re-batching back down to 20 is once again performed. Thus, when you're done, you'll have between 20 and 39 complete batches of some size. Unless you're really lucky, you'll also have some data left over at the end in a partial batch that won't be used in the confidence-interval computation. The reason for this re-batching into larger batches stems from an analysis by Schmeiser (1982), who demonstrated that there's no advantage to the other option, of continuing to gather more and more batches of a fixed size to reduce the half width, since the larger batches will have inherently lower variance, compensating for having fewer of them. On the other hand, having batches that are too small, even if you have a lot of them, is dangerous since they're more likely to produce correlated batch means, and thus an invalid confidence interval.
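To make the 20-to-40-and-back mechanism concrete, here is a small Python sketch of that re-batching loop for the Tally case (starting batch size 16 observations). It mimics the scheme described above rather than reproducing Arena's actual internals; the class name and interface are our own.

```python
class OnTheFlyBatcher:
    """Sketch of the re-batching scheme described in the text: collect
    batch means of a fixed size; once 40 complete batches accumulate,
    average adjacent pairs back down to 20 batches of twice the size."""

    def __init__(self, initial_size=16):
        self.size = initial_size   # observations per batch (Tally case)
        self.batches = []          # completed batch means so far
        self.current = []          # observations in the partial batch

    def add(self, x):
        self.current.append(x)
        if len(self.current) == self.size:
            self.batches.append(sum(self.current) / self.size)
            self.current = []
            if len(self.batches) == 40:    # re-batch 40 -> 20, doubling size
                self.batches = [(a + b) / 2 for a, b in
                                zip(self.batches[::2], self.batches[1::2])]
                self.size *= 2
```

Whenever the run ends, between 20 and 39 complete batches remain (plus a discarded partial batch in self.current), matching the text's description.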

The final check is to see if the batches are big enough to support the assumption of independence between the batch means. Arena tests for this using a statistical hypothesis test due to Fishman (1978). If this test is passed, you'll get the Half Width for this output variable in your report. If not, you'll get "(Correlated)," indicating that your process is evidently too heavily autocorrelated for your run length to produce nearly uncorrelated batches; again, lengthening your run should resolve this problem, though, depending on the situation, you may have to lengthen it a lot.

Returning to Model 7-4, after making the one 50-day run and deleting the first two days as a warm-up period, Arena produced in the Category Overview report a 95% batch-means confidence interval of 13.6394 ± 1.38366 on expected average WIP. The reason this batch-means confidence interval shows up here in the Category Overview report is that we've only made a single replication.

In most cases, these automatic batch-means confidence intervals will be enough for you to understand how precise your averages are, and they're certainly easy (requiring no work at all on your part). But there are a few considerations to bear in mind. First, don't forget that these are relevant only for steady-state simulations; if you have a terminating simulation (Chapter 6), you should be making independent replications and doing your analysis on them, in which case Arena reports cross-replication confidence-interval half widths as in Chapter 6, and does no within-run batching. Second, you still ought to take a look at the Warm-up Period for your model, as in Section 7.2.1, since the automatic batch-means confidence intervals don't do anything to correct for initialization bias if you don't specify a Warm-up Period. Finally, you can check the value of the automatic batch-means half width as it is computed during your run, via the Arena variables THALF(Tally ID) for Tally statistics and DHALF(Dstat ID) for time-persistent (a.k.a. Dstat) statistics; one reason to be interested in this is that you could use one of these for a critical output measure in the Terminating Condition field of your Simulate module to run your model long enough for it to become small (i.e., precise) enough to suit you; see Section 12.5 for more on this and related ideas.
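The stopping idea behind watching THALF or DHALF can be sketched generically. The function below is our own illustration, not Arena code: `next_batch_mean` is a hypothetical zero-argument callable producing the next batch mean from the ongoing run, and the normal critical value 1.96 stands in for the t value a careful implementation would use.

```python
import math
from statistics import mean, stdev

def run_until_precise(next_batch_mean, target_hw, min_batches=20, max_batches=1000):
    """Sequential-stopping sketch: keep collecting batch means until the
    (normal-approximation) 95% half width drops to target_hw, or until a
    batch budget is exhausted. Returns (point estimate, half width,
    number of batches used)."""
    xs = [next_batch_mean() for _ in range(min_batches)]
    while True:
        hw = 1.96 * stdev(xs) / math.sqrt(len(xs))
        if hw <= target_hw or len(xs) >= max_batches:
            return mean(xs), hw, len(xs)
        xs.append(next_batch_mean())   # "lengthen the run" by one batch
```

This is the same logic as putting a THALF or DHALF condition in the Terminating Condition field: the run length becomes a random quantity driven by the precision you demand.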

We should mention that, if you really want, you can decide on your own batch sizes, compute and save the batch means, then use them in statistical procedures like confidence-interval formation and comparison of two alternatives (see Sections 6.3 and 6.4). Briefly, the way this works is that you save your within-run history of observations just as we did to make our warm-up-determination plots, read this file into the Output Analyzer, and use its Analyze > Batch/Truncate Obs'ns capability to do the batching and averaging, saving the batch means that you then treat as we did the cross-replication means in Sections 6.3 and 6.4. When batching, the Output Analyzer performs the same statistical test for uncorrelated batches, and will let you know if the batch size you selected is too small. Some reasons to go through this are if you want something other than a 95% confidence level, if you want to make the most efficient use of your data and minimize waste at the end, or if you want to do a statistical comparison between two alternatives based on their steady-state performance. However, it's certainly a lot more work.

7.2.4 What To Do?

We've shown you how to attack the same problem by a couple of different methods and hinted that there are a lot more methods out there. So which one should you use? As usual, the answer isn't obvious (we warned you that steady-state output analysis is difficult). Sometimes there are tradeoffs between scientific results and conceptual (and practical) simplicity, and that's certainly the case here.

In our opinion (and we don't want to peddle this as anything more than opinion), we might suggest the following list, in decreasing order of appeal:

1. Try to get out of doing a steady-state simulation altogether by convincing yourself (or your patron) that the appropriate modeling assumptions really entail specific starting and stopping conditions. Go to Chapter 6 (and don't come back here).

2. If your model is such that the warm-up is relatively short, probably the simplest and most direct approach is truncated replication. This has obvious intuitive appeal, is easy to do (once you've made some plots and identified a reasonable Warm-up Period), and basically winds up proceeding just like statistical analysis for terminating simulations, except for the warm-up. It also allows you to take advantage of the more sophisticated analysis capabilities in PAN and OptQuest.

3. If you find that your model warms up slowly, then you might consider batch means, with a single warm-up at the beginning of your single really long run. You could either accept Arena's automatic batch-means confidence intervals or handcraft your own. You cannot use the statistical methods in PAN or OptQuest with the batch-means approach, however (part of the reason this is last in our preference list).

7.2.5 Other Methods and Goals for Steady-State Statistical Analysis

We've described two different strategies (truncated replications and batch means) for doing steady-state statistical analysis; both of these methods are available in Arena. This has been an area that's received a lot of attention among researchers, so there are a number of other strategies for this difficult problem: econometric time-series modeling, spectral analysis from electrical engineering, regenerative models from stochastic processes, standardized time series, as well as variations on batch means like separating or weighting the batches. If you're interested in exploring these ideas, you might consult Chapter 9 of Law and Kelton (2000), a survey paper like Sargent, Kang, and Goldsman (1992), or peruse a recent volume of the annual Proceedings of the Winter Simulation Conference, where there are usually tutorials, surveys, and papers covering the latest developments on these subjects.

7.3 Summary and Forecast

Now you should have a very good set of skills for carrying out fairly detailed modeling and have an understanding of (and know what to do about) issues like verification and steady-state statistical analysis. In the following chapter, we'll expand on this to show you how to model complicated and realistic material-handling operations. In the chapters beyond, you'll drill down deeper into Arena's modeling and analysis capabilities to exploit its powerful and flexible hierarchical structure.

7.4 Exercises

7-1 In Exercise 4-7, is the run long enough to generate a batch-means-based confidence interval for the steady-state expected average cycle time? Why or why not?

7-2 Modify your solution for Exercise 5-2 to include transfer times between part arrival and the first machine, between machines, and between the last Machine 1 and the system exit. Assume all part transfer times are UNIF(1.5, 3.2). Animate your model to show part transfers with the part entity picture changing when it departs from Machine 2. Run for 20,000 minutes. To the extent possible, indicate the batch-means-based confidence intervals on expected steady-state performance measures from this run.

7-3 Using the model from Exercise 7-2, change the processing time for the second pass on Machine 1 to TRIA(6.7, 9.1, 13.6) using Sequences to control the flow of parts
