How much does multi-user slow down performance?

How much does multi-user slow down performance?

Post by Sundial Servic » Sun, 31 Dec 1899 09:00:00




>With one user, access to all the tables I use is near-instantaneous; as I
>add users, the speed decreases linearly.  With two users performing the
>same operation simultaneously, the speed drops from instant to two to four
>seconds.  When ten users are simultaneously using the system, my test
>script goes from about two minutes to perform the specified operations
>to ten minutes.
>I have noticed that if User 1 accesses Table A (Open, Edit, Close) and is
>also the next user to access it, the operation is instantaneous.  However,
>if User 2 accesses Table A before User 1, the speed decreases again.  This
>sounds like a caching problem.  Another interesting note is that the
>slowdown also occurs in Database Desktop when two stations have the same
>table open simultaneously.  This would seem to point to the BDE (as
>opposed to my coding technique).
>What BDE parameters could be causing this?  Are there parameters on both
>the server and the workstation that I should check?  The network and NT
>are configured properly to the best of my knowledge.  An NT consultant is
>currently analyzing the system.  I can't believe that Paradox is this
>slow.  Does Paradox 7 versus Paradox 5 affect performance?

When User 1 opens the table the second time, he can tell by looking at the
update counter that no one has changed the table, so any cached copies are
still good.  Otherwise the data must be retrieved again.

What you need to do is to look at your algorithm.  For example, keep the
tables open.  Do a batch of updates at a time, protected by one table-lock,
instead of relying on record-locking each time.  
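Roughly, in Delphi terms (only a sketch, assuming a TTable, called Table1
here, that you leave open between operations, with DB and DBTables in the
uses clause):

    procedure UpdateBatch(Table1: TTable);
    begin
      Table1.LockTable(ltWriteLock);     { one table-level lock for the batch }
      try
        { ... all of the pending FindKey / Edit / Post operations go here ... }
      finally
        Table1.UnlockTable(ltWriteLock); { one release, not many record locks }
      end;
    end;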

In a file-server system especially, *communication* is often the bottleneck.  
In your scenario a lot of data is passing over the wire each time.  There are
a lot of directory searches, file operations and so on being performed.

So, it -is- your coding technique.  I suspect that if you put your mind to it
you can get those eight minutes back.

/mr/


How much does multi-user slow down performance?

Post by Sundial Servic » Sun, 31 Dec 1899 09:00:00



>It sounds like you're saying that record locking in multi-user
>environments is not advisable?  The following code is common for my app:
>Table.Open
>Table.FindKey
>Table.Edit
>...
>Table.Post
>Table.Close

What I am suggesting is that you do not repeatedly open and close the table.  
Keep it open.  
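For example, open the table once when the form (or data module) starts up
and close it only at shutdown.  A sketch, with Table1, TForm1 and the field
name standing in for whatever you actually use:

    procedure TForm1.FormCreate(Sender: TObject);
    begin
      Table1.Open;     { pay the directory-search and open cost once }
    end;

    procedure TForm1.FormDestroy(Sender: TObject);
    begin
      Table1.Close;    { close only when the application shuts down }
    end;

    { the per-record work then shrinks to: }
    procedure TForm1.UpdateCustomer(const Key: string);
    begin
      if Table1.FindKey([Key]) then
      begin
        Table1.Edit;
        Table1.FieldByName('LastContact').AsDateTime := Now;
        Table1.Post;
      end;
    end;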

Also, is there a way to issue an UPDATE query instead?  Cache the updates the
user makes, then start a transaction, fire off a slew of UPDATE queries, then
commit.  (Depends, of course, on exactly what DBMS you are using!)  :-)
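Something along these lines, as a sketch only; the table and field names are
invented, Query1 is assumed to point at the same DatabaseName as Database1,
and whether StartTransaction works against local Paradox tables depends on
your BDE version:

    procedure PostCachedChanges(Database1: TDatabase; Query1: TQuery);
    begin
      Query1.SQL.Text :=
        'UPDATE Orders SET Status = :Status WHERE OrderNo = :OrderNo';
      Database1.StartTransaction;
      try
        { in real code, loop here over the edits the user has cached }
        Query1.ParamByName('Status').AsString   := 'SHIPPED';
        Query1.ParamByName('OrderNo').AsInteger := 1001;
        Query1.ExecSQL;

        Database1.Commit;
      except
        Database1.Rollback;
        raise;
      end;
    end;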

An analogous operation would be to cache the updates locally, then grab a
"Write" lock on the table, boom through all the update operations in a batch,
and then release the "Write" lock.  This carries a whole lot more "payload"
with a single batch of read/write operations and gives the DBMS plenty of
opportunities to avoid redundant operations.  For example, acquiring a "Write"
lock means that individual record-locks are no longer required.
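In code that could look something like the sketch below.  The record type,
field names and the dynamic array are purely illustrative (a TList or a
small local table would do the same job); the lock/unlock pattern is the one
sketched after the first post:

    type
      TPendingEdit = record      { one cached change, held locally }
        OrderNo: Integer;
        NewStatus: string;
      end;

    var
      Pending: array of TPendingEdit;

    { called as the user works; nothing touches the shared table yet }
    procedure CacheEdit(OrderNo: Integer; const NewStatus: string);
    begin
      SetLength(Pending, Length(Pending) + 1);
      Pending[High(Pending)].OrderNo   := OrderNo;
      Pending[High(Pending)].NewStatus := NewStatus;
    end;

    { called once, e.g. when the user clicks Save }
    procedure FlushEdits(Table: TTable);
    var
      i: Integer;
    begin
      Table.LockTable(ltWriteLock);        { grab the "Write" lock }
      try
        for i := Low(Pending) to High(Pending) do
          if Table.FindKey([Pending[i].OrderNo]) then
          begin
            Table.Edit;                    { no individual record locks needed }
            Table.FieldByName('Status').AsString := Pending[i].NewStatus;
            Table.Post;
          end;
      finally
        Table.UnlockTable(ltWriteLock);    { release the "Write" lock }
        SetLength(Pending, 0);
      end;
    end;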

>The FindKey - Edit may be replaced with Append.  This seems very
>straightforward and simple, such that it shouldn't be slow, because I
>would think it is a very common algorithm for adding and editing
>records.  Does this algorithm seem inherently slow to anyone?  I'll
>leave the tables open and try it, but what about corrupted indexes if
>the machine hangs or is turned off?

If you have saved all outstanding changes to the database, e.g. with
DbiSaveChanges or a few calls to DbiUseIdleTime, then I would not expect a
problem in this area.
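For instance, something like this.  Check and TTable.Handle come from
DBTables; DbiSaveChanges and DbiUseIdleTime live in the BDE unit (DbiProcs
in older Delphi versions); the form and method names are placeholders:

    { flush the BDE's buffered writes for one table out to disk }
    procedure FlushTable(Table1: TTable);
    begin
      Check(DbiSaveChanges(Table1.Handle));
    end;

    { or let the BDE flush whenever the application is idle;   }
    { hook it up once, e.g. Application.OnIdle := AppIdle      }
    procedure TForm1.AppIdle(Sender: TObject; var Done: Boolean);
    begin
      DbiUseIdleTime;
    end;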