SIMULATION DIGEST VOL. 2, NUM. 1

Volume: 2, Issue: 1, Mon May  2 23:18:55 EDT 1988

+----------------+
| TODAY'S TOPICS |
+----------------+

(1) Event Handling
(2) Simulating Multi-Processor Systems
(3) The Viability of Distributed Simulation

----------------------------------------------------------------------------

Date:     Sun, 1 May 88 1:35:15 EDT
Subject:  Simulation digest

Re: time-stepped simulations. I've been using linked lists to keep
events sorted, but apparently heaps, a la Williams's heapsort, are
faster. Any comments? Does anyone have Fortran 77 code for handling
events?

Fred Bunn, Ballistic Research Lab, APG, MD (301) 278-6648.

----------------------------------------------------------------------------

Date: Sun, 1 May 88 15:15:05 PDT

To: uflorida!simulation
Subject: Re: SIMULATION DIGEST VOL. 1, NUM. 10
Newsgroups: comp.simulation
Organization: Lawrence Livermore National Laboratory

>I am looking for references on simulating a multiprocessor running some
>parallel code.  I am interested in studying the code speedup versus
>the number of processors.  I am currently using a little discrete event
>simulation program called "smpl" written by M. H. MacDougall.  My problem
>is that my simulation program is getting complex due to the synchronizations
>required between processors.  Is there a standard approach to simulating
>parallel algorithms on a (simulated) multiprocessor?

Simulating a multiprocessor, especially if the multiprocessor is a tightly
coupled shared-memory system and a large number of processors are to be
simulated, is an onerous task.  The number of cycles needed usually
precludes using a general purpose simulation language.  In the Cerberus
multiprocessor simulator, which simulates a scalable number of RISC
(Cray-like) processors connected to an equal number of memory modules via
a packet-switched network, we resorted to writing the simulator directly
in a parallel extension of C in order to get a simulation program with
adequate performance.  The cycle count is still quite high: about 1000
host machine instructions are required per simulated CPU instruction for
those instructions involving shared memory references.

There is very little in the literature on the tricks used to build
efficient simulators, i.e., how best to write a simulator for a given
system.  Each simulation task presents its own special problems and
opportunities.  Most of the literature describes event-driven general
purpose simulation languages, which are useful for quick prototyping but
show their major weakness when you start making production runs with the
resulting simulator.

It is interesting that the parallel extension of C used to write the
simulator is the same extension of C that is compiled and run on the
simulator; the simulator could probably simulate itself, but we aren't
that patient.  The amount of horsepower available for simulation tasks
on the new micro-based shared-memory multiprocessors is quite startling:
we get the performance equivalent of several Cray machines from the
current generation of multiprocessors.

----------------------------------------------------------------------------

Date: Mon, 2 May 88 08:53:57 EDT

Posted-From: The MITRE Corp., Bedford, MA

   Distributed simulation is a waste of time. Period.

   I have been working with communication network simulations
for about 6 years. Most of what follows has NO theoretical basis
at all and is entirely pragmatic.

 Simulations, as I have used them, are not finished, deliverable
products but continually evolving research tools. There is
constant feedback from simulation results to network design
and back again. Consequently it is vital that we *believe* the
results. If we do a run (say it takes an hour of CPU time) and
something funny happens at a node, then we have to figure out
whether it's a coding problem, an implementation problem, or a
design problem. We turn on the various detailed tracing facilities
we have, with great foresight, built in, and run it again until
we are absolutely sure we understand what happened.

 With any of the various distributed simulation approaches this
ability goes away. It is almost impossible to guarantee that
the messages will always be transmitted, arrive, and be processed
in the same order every time given the same scenario and input
parameters. (Please hold the flames about sophisticated tracing,
distributed debugging, etc. I'm talking pragmatics here - not
the latest ACM-published academic vapor.)

  In order to get any kind of certainty that your simulation does
what you hope it's doing, it is almost a necessity that you build
a non-distributed version of it - a simulation of your simulation.

 Now, I claim that the largest number of threads of control
(CPUs, tasks, whatever you may call them) you could possibly
have any hope of following and understanding (or purchasing) is
10. The best you could do in a perfect world is a speedup
factor of ten. But the world is imperfect - you probably couldn't
do better than 3, 5 at best. Hardly worth it in light
of the price paid (see above). Hell, you could get that by
waiting a couple of years (you would have spent this time
building, testing, and verifying your distributed monster) and buying a
faster chip with a fancier compiler.

 So where does that leave us? Well, those of us in the WORLD
who actually do this stuff stay with the tried and true methods
of the past. The others, locked in pursuit of advanced degrees,
write pretty articles and apply their methods to toy problems.

DISCLAIMER: Any snide comments in the above are semi-intentional.
I don't want to tick anyone off - I'd just like to stimulate
some discussion.

Jerry Freedman, Jr      "Love is staying up all night

(617)271-4563            or a healthy *"

+--------------------------+
| END OF SIMULATION DIGEST |
+--------------------------+