>we have a problem with one database table:
>- many escalations from page level lock to table level lock
>---> finally error message "resource deadlock"
>it is always the same table which is very frequently inserted / updated /
>Would it be a solution to explicitly use table locking for that table?
That would reduce the possibility of deadlock due to escalation, but it might
degrade performance. The reason you are getting deadlocks now is because other
people are using the table when you try to escalate, and they need to escalate
while you are using it. If you prevent that kind of concurrent access by
imposing table level locking you are going to introduce a bottleneck--and if
the table is used as heavily as you say, it is likely to be a worse problem
than the deadlocking.
Deadlocks are detected very quickly, and if the resulting rollback is quick and
the application handles the deadlock automatically, they need not be a very
intrusive problem. You need to decide if that particular cure is worse than
the disease. I suspect it might be.
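Handling the deadlock automatically in the application, as mentioned above, usually amounts to catching the error, rolling back, and retrying the whole unit of work. A minimal sketch follows; the DeadlockError class and the shape of the connection object are hypothetical stand-ins for whatever your driver actually provides, not any real Ingres API.

```python
import time

class DeadlockError(Exception):
    """Stands in for the driver's 'resource deadlock' error."""

def run_transaction(conn, work, max_retries=5, backoff=0.1):
    """Run work(conn) as one transaction, retrying after deadlock rollbacks."""
    for attempt in range(max_retries):
        try:
            result = work(conn)
            conn.commit()
            return result
        except DeadlockError:
            # The server has already chosen this transaction as the victim;
            # roll back, wait briefly, and retry the whole unit of work.
            conn.rollback()
            time.sleep(backoff * (attempt + 1))
    raise RuntimeError("transaction failed after repeated deadlocks")
```

The key point is that the retry must re-run the _whole_ logical transaction, not just the statement that happened to hit the deadlock.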
A better solution would be to reduce the excessive amount of escalating that is
happening. Apart from deadlocks, lots of escalation makes it more likely that
you will have lots of lock-contention and lots of lock-waits too. Reducing
escalation is going to improve performance overall, not just the deadlocking.
Some escalation is always going to occur, but there are four possible causes of
excessive escalation:
(1) the application is being too parsimonious with locks (running with
    maxlocks set to its pathetically inadequate default of 10, say);
(2) the optimizer is failing to recognize in advance that table-level locking
    is going to be required;
(3) the queries are insufficiently restricted; or
(4) the transactions include more work than is strictly required for
    consistency--the applications do not commit sufficiently frequently.
You can fix the first problem by increasing maxlocks to something
sensible--something in the hundreds, say; I like to set it to 200 as a starting
point. Maxlocks is set using the SET LOCKMODE statement, which can be
hard-coded into the application, supplied per-user in an environment variable
called ING_SET, or (on Unix and NT) set installation-wide as an Ingres
installation variable, also called ING_SET (see ingsetenv in the manual).
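For concreteness, here is a sketch of the two application-side routes just described; the cursor API is a hypothetical DB-API style driver, and the value 200 is just the starting point suggested above.

```python
import os

# 1. Hard-coded in the application, issued once at session start
#    (hypothetical DB-API style cursor):
#    cursor.execute("SET LOCKMODE SESSION WHERE maxlocks = 200")

# 2. Per-user, via the ING_SET environment variable, so every session
#    this user opens issues the statement automatically at connect time:
os.environ["ING_SET"] = "set lockmode session where maxlocks = 200"
```

The installation-wide route is the same statement, installed with ingsetenv rather than in the user's environment.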
You can often fix the second problem by ensuring that you have sufficiently
fresh and accurate (and relevant) statistics for all the tables involved. The
optimizer is blind without that information. (This is a big topic and I don't
intend to exhaust it here, I'm afraid.) You use optimizedb to gather
statistics. See the DBA Guide for details.
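A typical invocation, run from a script or cron job, might look like the sketch below; the database and table names are placeholders, and the exact flags should be checked against the DBA Guide for your release.

```python
import subprocess

# Gather statistics for one heavily-used table (names are placeholders):
cmd = ["optimizedb", "mydb", "-rmytable"]

# Requires a real Ingres installation, so it is left commented out here:
# subprocess.run(cmd, check=True)
```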
The solution to the third problem is simply a matter of ensuring that queries
contain all the restrictions they can. It is better to restrict the data in
the server than to download a vast number of rows to the client to be filtered
there. Apart from the intense inter-process communication required (possibly
over a network), as far as the server is concerned you've read that data, and
it needs to lock it to ensure read-consistency for you. (One could also play
with transaction isolation levels, perhaps choosing read_committed, say, if
that is appropriate.)
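The locking cost of an unrestricted query can be made concrete with a toy model: under read-consistency the server locks every row it reads on your behalf, so a query that ships the whole table locks the whole table. The table, column names, and figures below are purely illustrative.

```python
def rows_read_and_locked(table, where=None):
    """Toy model: how many rows the server reads (and so locks) for a query."""
    if where is None:                 # unrestricted query: full scan
        return len(table)
    return sum(1 for row in table if where(row))

# An illustrative table: 5 open orders, 95 closed ones.
orders = [{"status": "OPEN"}] * 5 + [{"status": "CLOSED"}] * 95

# Unrestricted query, filtered at the client: all 100 rows locked.
locked_all = rows_read_and_locked(orders)

# Restriction pushed into the WHERE clause: only the 5 qualifying rows locked.
locked_few = rows_read_and_locked(orders, where=lambda r: r["status"] == "OPEN")
```

Twenty times fewer locks for the same answer--and correspondingly less chance of hitting the escalation threshold.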
The fourth solution is just a matter of applying the principle that one should
commit as soon as possible, and no sooner. Once you've done just enough
updates to maintain logical consistency, you _must_ commit. The longer you
wait--the more logical transactions you include in one physical
transaction--the more locks you accumulate, which leads to escalation.
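The accumulation effect is easy to see with another toy model: suppose each update takes one page lock, locks are only released at commit, and escalation kicks in at some threshold (the figure below is an assumption for illustration, not a real Ingres constant).

```python
ESCALATION_THRESHOLD = 200   # assumed: locks held before escalating

def peak_locks(batches, commit_each=True):
    """Peak number of locks held while applying batches of updates."""
    held, peak = 0, 0
    for batch in batches:
        held += batch            # one lock per updated page
        peak = max(peak, held)
        if commit_each:          # commit as soon as the logical unit is done
            held = 0             # commit releases everything
    return peak

# 50 logical transactions of 10 updates each.
work = [10] * 50

# One physical transaction for everything: 500 locks held at the peak,
# well past the assumed escalation threshold.
# Committing after each logical transaction: the peak never exceeds 10.
```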
I could go on about this for hours, but I'll leave it to others. I have to go
and get some breakfast!
<http://www.rationalcommerce.com/>Rational Commerce Ltd.
"Ingres development, tuning, and training experts"