SIMULATION DIGEST VOL. 1, NUM. 9

Post by SIMULATION MODELING & ANALYS » Mon, 02 Apr 1990 19:41:00



Volume: 1, Issue: 9, Fri Apr 29 14:40:59 EDT 1988

+----------------+
| TODAY'S TOPICS |
+----------------+

(1) WELCOME to MODSIM subscribers
(2) Topics and Issues in Parallel Simulation

----------------------------------------------------------------------------

Date: Fri, 29 Apr 88 14:31:17 EDT


Subject: welcome

TO MODSIM GROUP:

Welcome, MODSIM group. You have been placed on the subscriber list
for the digest "comp.simulation". NOTE --> If you get a USENET feed, PLEASE
drop me a line and I will remove your address from the subscriber list
(you can use the local readnews or 'rn' facility to subscribe and
unsubscribe to the newsgroup "comp.simulation").

Many thanks to Unni Warrier for running such a successful group. Hopefully,
now that we have a formal newsgroup on USENET, we will get more
interaction in comp.simulation, especially with all the new MODSIM'ers.
Unni's MODSIM mailing list had roughly 200 sites and addresses on it
which he sent me to attach to the current subscriber list (for those
who cannot get a USENET feed).

 ----------------------------------------------------------------------

 Dept. of Computer Science. UUCP: ...ihnp4!codas!uflorida!fish!fishwick
 Univ. of Florida.......... PHONE: (904)-335-8036
 Bldg. CSE, Room 301....... FACS is available
 Gainesville, FL 32611.....
 ----------------------------------------------------------------------

----------------------------------------------------------------------------

Date: Fri, 29 Apr 88 08:55:29 PDT


Subject: Article for inclusion in simulation list

>FORWARDED to COMP.SIMULATION


Subject: PDS paradigms

Hi folks,

    My name is Dave Wagner, and I am a graduate student at the University
of Washington.  I'm doing my Ph.D. research in parallel simulation.
I have been saving the mail in this group without taking
a very careful look at it until now.  So, at this point, I would like
to begin addressing a number of issues, some of which came up quite
some time ago.

    I will start by throwing in my two cents worth on the question of
semantics (philosophy?).  What do we mean by distributed simulation?
Two obvious criteria come to mind:

        (1) A simulation is distributed if it runs on a distributed
            system.
        (2) A simulation is distributed if timing information is
            spread out among a group of independent processes.

Obviously, a system can be built which satisfies (2) but not (1);  e.g.
a Chandy-Misra-Bryant or Time Warp simulation with all processes running
on a multiprogrammed uniprocessor.  What is less obvious is that there are
schemes that have been proposed that satisfy (1) but not (2):  that is,
a scheme in which a number of computing nodes continuously run a protocol
that keeps them all synchronized in simulation (virtual) time.
For a reference, see Peacock, Manning, and Wong, "Distributed Simulation
Using a Network of Processors", in Computer Networks 3 (1979) pp. 44-56.

Likewise, what do we mean by parallel simulation?  A time-driven circuit
simulator in which all elements are evaluated at all time steps is
certainly one type (in fact, I think IBM has just such a thing).
Or, an event-driven simulator in which all events with the same time
component as the one at the head of the list are evaluated in parallel.
(This has been done by Andrew Wilson at Encore;  see his article in
Simulation, Feb. 1987 issue, pp. 72-78.)
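To make the same-timestamp idea concrete, here is a toy sketch in Python
(my own illustration, not code from Wilson's article; the function name and
event representation are mine).  All events sharing the timestamp at the
head of the list are collected into one batch -- the batch is what a
parallel machine would evaluate concurrently.

```python
import heapq

def run_same_time_parallel(events, handler):
    """Repeatedly pop ALL events sharing the head-of-list timestamp
    and hand them to the handler as one batch.  On a real parallel
    machine the batch would be fanned out across processors."""
    heap = list(events)          # (time, payload) pairs
    heapq.heapify(heap)
    batches = []
    while heap:
        t = heap[0][0]
        batch = []
        while heap and heap[0][0] == t:
            batch.append(heapq.heappop(heap)[1])
        handler(t, batch)        # evaluate these events "in parallel"
        batches.append((t, batch))
    return batches
```

Note that correctness here rests on all events at time t being independent
of one another, which is exactly the assumption the scheme makes.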

My point is that there are a lot of things that can arguably fall under
the "parallel and distributed simulation" heading, so I am not going to
propose a black-and-white definition.  I will, however, tell you what
I consider to be an interesting class of "non-classical" simulation paradigms,
which happens to encompass both CMB and TW.  These are "Virtual Time" systems,
as defined by Jefferson:

    "Virtual time is a global, one-dimensional, temporal coordinate system
     imposed on a distributed computation;  it is used to measure compu-
     tational progress and to define synchronization.  It may or may not
     have a connection with real time."
     ("Virtual Time", ACM TOPLAS, 7/85, pp. 404-425).

The important thing about VTS's is that events APPEAR to happen in a
consistent order at all processes in the system, even though they may not
physically be executed in that order.
It is important to distinguish Virtual Time (a concept) from
Time Warp (one of its possible implementations).
CMB and TW are
both virtual time systems in that it appears to all processes
that simulation events are being executed in the (partial) order specified
by their simulation/virtual times, even though they may not be.  Every process
can have a different idea of "what time it is", and the simulation is still
correct as long as no event causality has been violated.
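To illustrate what "appears consistent even though not executed in order"
means for TW, here is a toy single-process rollback sketch (my own
simplification of Jefferson's mechanism; antimessages, output queues, and
GVT/fossil collection are all omitted).  Events arrive in arbitrary order;
a straggler forces a rollback to the last checkpoint before it, after
which the final state is the same as in-order execution.

```python
class TimeWarpProcess:
    """Toy optimistic process: execute events as they arrive, saving
    state; on a straggler (timestamp < local virtual time), roll back
    to the last checkpoint before it and re-execute."""
    def __init__(self):
        self.state = 0
        self.lvt = 0                 # local virtual time
        self.snapshots = [(0, 0)]    # (virtual time, state) checkpoints
        self.processed = []          # events executed so far, in order

    def _execute(self, t, delta):
        self.state += delta
        self.lvt = t
        self.snapshots.append((t, self.state))
        self.processed.append((t, delta))

    def receive(self, t, delta):
        if t < self.lvt:             # straggler: roll back
            keep = [(vt, s) for (vt, s) in self.snapshots if vt < t]
            self.lvt, self.state = keep[-1]
            self.snapshots = keep
            redo = [e for e in self.processed if e[0] >= t]
            self.processed = [e for e in self.processed if e[0] < t]
            for e in sorted(redo + [(t, delta)]):
                self._execute(*e)    # re-execute in timestamp order
        else:
            self._execute(t, delta)
```

The observable result (final state, event order in `processed`) is the
one the virtual-time ordering prescribes, which is the VTS guarantee.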

Speaking of causality, where does our friend Lamport fit into this?
Lamport showed, given a particular execution of a distributed system,
how a total order consistent with the causality of events in the system
could be constructed.  A VTS does the reverse:  given a total ordering
of events that obeys Lamport's Clock Conditions, it is able to obtain
a distributed execution of those events which is consistent with the
preassigned ordering.  (No discussions of relativity, please, as I am not
a physicist.)
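For readers who haven't seen it, Lamport's clock rule is small enough to
state as code (a minimal sketch; the class name is mine): tick before each
local event or send, and on receive jump to max(local, received) + 1.
Timestamps built this way satisfy the Clock Condition, i.e. if event a
causally precedes event b then C(a) < C(b).

```python
class LamportClock:
    """Lamport's logical clock for one process."""
    def __init__(self):
        self.time = 0

    def tick(self):                  # local event or message send
        self.time += 1
        return self.time

    def receive(self, msg_time):     # message carries sender's timestamp
        self.time = max(self.time, msg_time) + 1
        return self.time
```

Breaking ties by process id turns these timestamps into the total order
Lamport constructs; a VTS starts from such a preassigned order instead.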

In other words, I think that interesting simulation paradigms are those in
which processes execute ASYNCHRONOUSLY in real time, but SYNCHRONOUSLY in
virtual time.  This is not to say that they are any better than other
paradigms (in particular, the time-stepped, parallel evaluation paradigm
is probably superior for systems that display a lot of concurrency in
the virtual time sense, i.e. VLSI circuits).  Just more interesting, in
my opinion.  This class does not assume anything about the architecture
of the system, i.e. it may be a distributed system, a shared-memory
multiprocessor, or even a multiprogrammed uniprocessor.


Quote:> A ds algorithm is an operating system. It must
> schedule processes (earliest process first), route messages,
> allocate memory (earlier messages receive priority), etc.

Because of the semantics of VT,  it is sufficient, but not necessary, that
a VTS schedule processes using the earliest-first discipline.
But this brings up an interesting question, one of efficiency.
It is fairly easy to argue that a Time Warp implementation should schedule
earliest-VT-first, to minimize rollbacks.  However, it's not clear that
earliest-first scheduling provides any advantage in the CMB scheme.
If we ignore the issue of debugging for a moment, what is wrong with
allowing one process to get arbitrarily far ahead of another?
(Provided, of course, that we don't let any process execute beyond
the end-of-simulation time, after which point it would just be wasting
CPU resources.)  It is intuitively appealing that the simulation would
be more efficient if it were kept "balanced" to some extent, but who
can furnish a proof of this?  Keep in mind also that scheduling carries
its own overhead, and thus we can't schedule infinitely often.
(If you were able to schedule earliest-first at an arbitrarily fine
granularity, you would be back to synchronous, time-stepped
simulation, which is obviously correct, and just as obviously uninteresting
in this context.)
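The earliest-first discipline itself is easy to sketch (my own toy model,
not any particular TW or CMB implementation; here each logical process is
just a preassigned list of event times): each scheduling round, run the
process whose next event has the smallest virtual time.

```python
import heapq

def earliest_vt_first(processes, steps):
    """Run at most `steps` scheduling rounds, each time picking the
    logical process with the smallest next-event virtual time.
    `processes` maps a process name to its list of event times;
    returns the (virtual time, process) execution order."""
    heap = [(times[0], name, 0)
            for name, times in processes.items() if times]
    heapq.heapify(heap)
    order = []
    for _ in range(steps):
        if not heap:
            break
        vt, name, i = heapq.heappop(heap)
        order.append((vt, name))             # "execute" this event
        times = processes[name]
        if i + 1 < len(times):               # re-enter with next event
            heapq.heappush(heap, (times[i + 1], name, i + 1))
    return order
```

The `steps` bound is the scheduling granularity knob from the paragraph
above: with `steps` large you interleave strictly by virtual time, while
a coarser scheduler would let one process run ahead between rounds.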

I hope this posting will get some discussion going here again.
It's great to have all you people to bounce ideas off of;  I can't
believe I haven't done this before now.  More to follow soon...

                Dave Wagner
                University of Washington Comp Sci Department

                {ihnp4,harvard,tektronix}!uw-beaver!wagner

+--------------------------+
| END OF SIMULATION DIGEST |
+--------------------------+