My colleagues and I have recently been doing extensive testing to see
how different design choices affect the performance of batch
(performance-critical) operations.
During our work, we found that using declared FOREIGN KEY constraints
to enforce referential integrity gave better performance than creating
indexes and checking the constraints with triggers (we're using ERWin
to generate the trigger RI code automatically). Because SQL Server
does not implement any cascading RI rules (e.g. PARENT DELETE
CASCADE), we realize it may still be necessary to create and maintain
indexes on the child tables -- which is undesirable, since MANY
INDEXES = LOWER PERFORMANCE.
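The setup described above -- a declared foreign key plus an explicit index on the child table's FK column -- can be sketched as follows. This is only an illustration using SQLite through Python's sqlite3 module as a stand-in for SQL Server; the table and index names (orders, order_lines, ix_order_lines_order_id) are hypothetical, not from our actual schema.

```python
import sqlite3

# SQLite stands in for SQL Server here; the declarative-FK semantics
# (reject orphan inserts, block deletes of referenced parents) are analogous.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK enforcement

# Parent table: the referenced column must be a PRIMARY KEY or UNIQUE,
# so the parent side always has an index backing it.
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")

# Child table with a declared foreign key back to the parent.
conn.execute("""
    CREATE TABLE order_lines (
        line_id  INTEGER PRIMARY KEY,
        order_id INTEGER NOT NULL REFERENCES orders(order_id)
    )
""")

# The explicit child-side index under discussion: without it, checking
# "does this parent still have children?" on DELETE requires a scan.
conn.execute("CREATE INDEX ix_order_lines_order_id ON order_lines(order_id)")
```

The point of the child-side index is exactly the trade-off described above: it speeds up the parent-delete check (and any cascading logic done in triggers), at the cost of extra index maintenance on every child-table write.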
A few questions for those technically "in the know":
When a Foreign Key constraint is defined, SQL Server makes sure that the
parent data exists on INSERTs (or else it fails the insert).
1) Since there is no evident/necessary index on the parent table,
what is SQL Server using to efficiently enforce the RI rule? Is there
some sort of "hidden index"? Can we access it, and if not, why is it
more efficient than a conventional index?
2) If you attempt to delete a parent row on which a child table has a
Foreign-Key constraint defined, and the child table contains rows
referencing the row you want to delete, SQL Server very quickly returns
an error. How does it check the child table so efficiently? Is there
another "hidden index" on the child to enforce this, or is some other
technique used, such as a flag on the parent row indicating the
existence of children?
3) If "hidden indexes" are being created, can we use/rely on them
for our own purposes (e.g. query optimization)? Are they on the parent,
the child, both, or neither?
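For anyone who wants to see the two enforcement behaviors in questions 1 and 2 concretely, here is a minimal sketch -- again using SQLite via Python's sqlite3 as a stand-in for SQL Server, with hypothetical table names -- showing the orphan INSERT being rejected and the parent DELETE being blocked:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE order_lines (
        line_id  INTEGER PRIMARY KEY,
        order_id INTEGER NOT NULL REFERENCES orders(order_id)
    )
""")

conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO order_lines VALUES (10, 1)")  # parent exists: accepted

# Behavior 1: an INSERT into the child with no matching parent fails at once.
try:
    conn.execute("INSERT INTO order_lines VALUES (11, 99)")
except sqlite3.IntegrityError as e:
    print("insert rejected:", e)

# Behavior 2: a DELETE of a parent row that still has children also fails.
try:
    conn.execute("DELETE FROM orders WHERE order_id = 1")
except sqlite3.IntegrityError as e:
    print("delete rejected:", e)
```

In SQLite the insert-time check uses the parent's primary-key index, and the delete-time check scans the child FK column (hence the advice to index it); whether SQL Server's internal mechanism is the same is exactly what we're asking about.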
Our second question is:
Has anyone run into performance problems due to excessive indexes
created by ERWin's RI index/trigger generation? How did you go about
resolving the problems?
Thanks in advance!