Concurrency/reader-writer problem

Post by Boris Burtin » Fri, 11 Jul 1997 04:00:00



We are running an OLTP (multiple readers, one writer) environment with
Microsoft SQL server.  I was hoping to get some suggestions on how we
can increase concurrency on the database.

The database is being used as a repository for stock quote data.  Users
are constantly retrieving stock quotes.  Meanwhile, the stock feed
processor is writing into the same table.  So while the update is
happening, everyone gets locked out.

I am looking for a way to increase the concurrency on the system.  For
example, if a quote is being updated, I'd like the user to get old data,
rather than have to wait for the update to finish.  Unfortunately there
doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
using dirty reads, with limited success (comes back quickly, but
sometimes doesn't return data).

Does anyone have a suggestion on how to resolve this problem?  Do other
database vendors (relational or object) handle this situation more
gracefully?  Would row-level locking on Oracle or some other database
make a huge difference?

I'd appreciate any advice.

Boris Burtin
The Daily Rocket Company

 
 
 

Concurrency/reader-writer problem

Post by Anthony Mandi » Sat, 12 Jul 1997 04:00:00



> We are running an OLTP (multiple readers, one writer) environment with
> Microsoft SQL server.  I was hoping to get some suggestions on how we
> can increase concurrency on the database.

> The database is being used as a repository for stock quote data.  Users
> are constantly retrieving stock quotes.  Meanwhile, the stock feed
> processor is writing into the same table.  So while the update is
> happening, everyone gets locked out.

        If by "update" you really mean an UPDATE statement, then that is
        part of your problem.

Quote:> I am looking for a way to increase the concurrency on the system.  For
> example, if a quote is being updated, I'd like the user to get old data,
> rather than have to wait for the update to finish.  Unfortunately there
> doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
> using dirty reads, with limited success (comes back quickly, but
> sometimes doesn't return data).

        You should be inserting new records for a start, rather than
        doing updates. This way, at least, some of the older records
        would still be readable. Use some form of timestamp (i.e. the
        current date and time - a datetime field type perhaps, not the
        timestamp field type) to distinguish between the rows.
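
        For example, something along these lines (just a sketch, with
        made-up table and column names rather than Boris's actual schema):

        -- every feed tick becomes a new row; nothing is updated in place
        CREATE TABLE quotes
        (
            symbol    char(8)  NOT NULL,
            price     money    NOT NULL,
            quoted_at datetime NOT NULL DEFAULT GETDATE()
        )

        -- the feed processor only ever inserts
        INSERT INTO quotes (symbol, price) VALUES ('ACME', 42.50)

        -- readers pick the newest row per symbol
        SELECT q.symbol, q.price, q.quoted_at
        FROM   quotes q
        WHERE  q.quoted_at = (SELECT MAX(q2.quoted_at)
                              FROM   quotes q2
                              WHERE  q2.symbol = q.symbol)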

Quote:> Does anyone have a suggestion on how to resolve this problem?  Do other
> database vendors (relational or object) handle this situation more
> gracefully?  Would row-level locking on Oracle or some other database
> make a huge difference?

        The big problem you are having is with page level locking.
        Revise your table design to alter the nature of the clustered
        index to maximise the distribution spread of your table's data.
        This will help to reduce the contention somewhat. And use
        inserts instead of updates to generate more data and spread
        it around on the table's pages.
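
        For instance, clustering on the symbol (rather than on something
        monotonically increasing) spreads the rows, and therefore the lock
        contention, across many pages. A sketch, assuming the hypothetical
        quotes table above:

        -- cluster on symbol so concurrent readers and the writer tend to
        -- touch different pages instead of all piling onto the same ones
        CREATE CLUSTERED INDEX quotes_symbol_ci
            ON quotes (symbol, quoted_at)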

-am

 
 
 

Concurrency/reader-writer problem

Post by David Bratton » Sat, 12 Jul 1997 04:00:00



> We are running an OLTP (multiple readers, one writer) environment with
> Microsoft SQL server.  I was hoping to get some suggestions on how we
> can increase concurrency on the database.

> The database is being used as a repository for stock quote data.  Users
> are constantly retrieving stock quotes.  Meanwhile, the stock feed
> processor is writing into the same table.  So while the update is
> happening, everyone gets locked out.

> I am looking for a way to increase the concurrency on the system.  For
> example, if a quote is being updated, I'd like the user to get old data,
> rather than have to wait for the update to finish.  Unfortunately there
> doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
> using dirty reads, with limited success (comes back quickly, but
> sometimes doesn't return data).

Sounds like you're doing indirect updates. That is, delete followed by
insert instead of update. Review the rules for direct vs. indirect and
see if you can make the update program do direct updates. Then dirty
reads should work for you.

Quote:

> Does anyone have a suggestion on how to resolve this problem?  Do other
> database vendors (relational or object) handle this situation more
> gracefully?  Would row-level locking on Oracle or some other database
> make a huge difference?

If your update program is doing mass updates instead of one row at a
time, you might try coding it to do table locks instead of page locks,
assuming you have the source for the update program. That should make
each update happen faster. Other queries at normal isolation levels
will block until the exclusive table lock is granted and the update
occurs, though, so it may or may not help; you have to test and find
out. In general, I usually get better throughput when mass updates are
happening if I use table locks, but some individual queries may take
longer.
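
Something like this, as a rough sketch (the staging table name is made up,
and on MS SQL Server the TABLOCKX locking hint is one way to ask for an
exclusive table lock; check your server's documentation for the exact
hint syntax):

    -- take one exclusive table lock up front instead of escalating
    -- page locks one at a time during the mass update
    BEGIN TRAN
        UPDATE quotes
        SET    price = f.price
        FROM   quotes (TABLOCKX), feed_stage f
        WHERE  quotes.symbol = f.symbol
    COMMIT TRAN
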
______________________________________________________________________________
David Bratton
University of Houston System                    THE BILL OF RIGHTS

 
 
 

Concurrency/reader-writer problem

Post by David Lance Wolf » Sat, 12 Jul 1997 04:00:00



> We are running an OLTP (multiple readers, one writer) environment with
> Microsoft SQL server.  I was hoping to get some suggestions on how we
> can increase concurrency on the database.

> The database is being used as a repository for stock quote data.  Users
> are constantly retrieving stock quotes.  Meanwhile, the stock feed
> processor is writing into the same table.  So while the update is
> happening, everyone gets locked out.

> I am looking for a way to increase the concurrency on the system.  For
> example, if a quote is being updated, I'd like the user to get old data,
> rather than have to wait for the update to finish.  Unfortunately there
> doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
> using dirty reads, with limited success (comes back quickly, but
> sometimes doesn't return data).

> Does anyone have a suggestion on how to resolve this problem?  Do other
> database vendors (relational or object) handle this situation more
> gracefully?  Would row-level locking on Oracle or some other database
> make a huge difference?

> I'd appreciate any advice.

> Boris Burtin
> The Daily Rocket Company

If you have a unique index on the table, you could try having the
readers running at Transaction Isolation Level 0, which allows for dirty
reads, i.e. reads of uncommitted data.
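
For example, either per connection or per statement (the table name is just
a placeholder; on Sybase the session form is SET TRANSACTION ISOLATION LEVEL 0):

    -- per connection: every read on this session is a dirty read
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    -- or per statement, with a locking hint on the table being read
    SELECT symbol, price
    FROM   quotes (NOLOCK)
    WHERE  symbol = 'ACME'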

Dave Wolf
Sybase Professional Services


 
 
 

Concurrency/reader-writer problem

Post by Michael Welcome » Sat, 12 Jul 1997 04:00:00


Dear Boris,

I have some ideas that you may consider.  First, it should be possible
to update data in a table without locking everyone else out.  If this
is happening, it may be because you are updating a field that affects
an index.  If that is the case, try to determine if the index is REALLY
needed for other purposes.

Second, determine if there are other factors that may be slowing
the updates down.  For example, do the tables have triggers on
them?  Are these triggers essential, or is there some other way
of doing the job they do?

Third, are you performing multiple updates in the span of one transaction
using T-SQL BEGIN TRAN/COMMIT TRAN?  If so, are there other
ways of handling this?

Fourth, it is true that if you specify the NOLOCK option you will
sometimes get an error code if the server is in the middle of an update.  The
documentation indicates that this should be handled by reissuing
the call.

You may also consider your database structure.  You might consider
something like this:

securities
--------------
sec_num
cur_price
price_expiration

securities_prices
-------------------------
security_num
price_time
price

If you are usually retrieving only the most current price, then
you can use the cur_price cached in the parent record.  You
can place an expiration timestamp on the price, and when the
time has expired, retrieve the new most current price from the
securities_prices table, cache the price, and reset the
timestamp.  This would limit the number of updates to
the table that is used to retrieve the prices.
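
A rough sketch of that read/refresh path, using the column names above (the
one-minute expiry window and the literal id are just examples):

    DECLARE @sec_num int, @latest money
    SELECT  @sec_num = 1234              -- whichever security is wanted

    -- normal case: the cached price is still fresh, no detail-table read
    SELECT cur_price
    FROM   securities
    WHERE  sec_num = @sec_num
      AND  price_expiration > GETDATE()

    -- on expiry: pull the latest detail price, cache it, reset the clock
    SELECT @latest = p.price
    FROM   securities_prices p
    WHERE  p.security_num = @sec_num
      AND  p.price_time = (SELECT MAX(price_time)
                           FROM   securities_prices
                           WHERE  security_num = @sec_num)

    UPDATE securities
    SET    cur_price        = @latest,
           price_expiration = DATEADD(minute, 1, GETDATE())
    WHERE  sec_num = @sec_num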

--
Regards,

Michael Welcome


check out  http://www.sqltech.com



Quote:> We are running an OLTP (multiple readers, one writer) environment with
> Microsoft SQL server.  I was hoping to get some suggestions on how we
> can increase concurrency on the database.

> The database is being used as a repository for stock quote data.  Users
> are constantly retrieving stock quotes.  Meanwhile, the stock feed
> processor is writing into the same table.  So while the update is
> happening, everyone gets locked out.

> I am looking for a way to increase the concurrency on the system.  For
> example, if a quote is being updated, I'd like the user to get old data,
> rather than have to wait for the update to finish.  Unfortunately there
> doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
> using dirty reads, with limited success (comes back quickly, but
> sometimes doesn't return data).

> Does anyone have a suggestion on how to resolve this problem?  Do other
> database vendors (relational or object) handle this situation more
> gracefully?  Would row-level locking on Oracle or some other database
> make a huge difference?

> I'd appreciate any advice.

> Boris Burtin
> The Daily Rocket Company

 
 
 

Concurrency/reader-writer problem

Post by Michael F. Nicewarner » Sat, 12 Jul 1997 04:00:00


Another thought would be to look at the manner in which the "writer" is
updating the data.  If this is a single transaction to update all data, then
the rows/pages will be locked consecutively until the transaction is finished
(it ends up locking the whole table in the process).  If you can "break down"
the update process to commit more often, then the scope of the locking will
be substantially reduced (a rough sketch follows below).  Also, how often do
you update the clients?  It really doesn't make sense to update them more
than once every 15 minutes or so.  Most stock services don't update more
often than that anyway.  Reducing the frequency of reads might also help
your situation.  Personally, I think always inserting rows, as Anthony
suggests, isn't really the best solution for this application.  If you are
working with stock quotes, you only want the latest value.  Hmmm, unless
that is a stated requirement....  If that *is* the case, then you will need
to constantly insert.
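
As a rough sketch of the "commit more often" idea (SET ROWCOUNT batching is
just one way to do it; the table names are made up):

    -- commit in small batches so readers only ever queue behind a few pages
    DECLARE @done int
    SELECT  @done = 0

    SET ROWCOUNT 500                     -- touch at most 500 rows per pass
    WHILE @done = 0
    BEGIN
        BEGIN TRAN
        UPDATE quotes
        SET    price = f.price
        FROM   quotes, feed_stage f
        WHERE  quotes.symbol = f.symbol
          AND  quotes.price <> f.price   -- only rows not yet in sync
        IF @@rowcount < 500
            SELECT @done = 1             -- last (possibly empty) batch
        COMMIT TRAN
    END
    SET ROWCOUNT 0                       -- back to no limit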

Mike Nicewarner

 
 
 

Concurrency/reader-writer problem

Post by Boris Burtin » Tue, 15 Jul 1997 04:00:00



> Sounds like you're doing indirect updates. That is, delete followed by
> insert instead of update. Review the rules for direct vs. indirect and
> see if you can make the update program do direct updates. Then dirty
> reads should work for you.

I'm using BCP to blow the latest feed data into a temp table and then
running a join query to update the production table from the temp
table.  Where can I get more info on direct vs. indirect updates?  SQL
Server docs?  web?  book?
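
Roughly this shape (names are placeholders, not our real schema):

    -- 1. bulk-load the feed file into a staging table, e.g.
    --      bcp quotesdb..feed_stage in feed.dat -c -Usa -P*** -Sserver
    -- 2. then push it into production with a join update:
    UPDATE quotes
    SET    price = f.price
    FROM   quotes, feed_stage f
    WHERE  quotes.symbol = f.symbol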

Quote:> If your update program is doing mass updates instead of one row at a
> time, you might try coding it to do table locks instead of page locks,
> assuming you have the source for the update program.

That's a good idea.  I'll take a look and see if that helps.  It would
be a benefit to make everyone wait a few seconds, if I can make the
update run faster.

Thanks for the help!

Boris

 
 
 

Concurrency/reader-writer problem

Post by Boris Burti » Tue, 15 Jul 1997 04:00:00




> > I am looking for a way to increase the concurrency on the system.  For
> > example, if a quote is being updated, I'd like the user to get old data,
> > rather than have to wait for the update to finish.  Unfortunately there
> > doesn't seem to be a way to do this in Sybase/MS-SQL.  I have tried
> > using dirty reads, with limited success (comes back quickly, but
> > sometimes doesn't return data).

>         You should be inserting new records for a start, rather than
>         doing updates. This way, at least, some of the older records
>         would still be readable. Use some form of timestamp (i.e. the
>         current date and time - a datetime field type perhaps, not the
>         timestamp field type) to distinguish between the rows.

Wow, that's freakin' brilliant.  A hack, but still brilliant.  In an
ideal world, the database would be doing this for me.  Of course I'm
working with Microsoft technology so I can't ask for ideals.  This
should be easy enough to implement--just add an extra column to the
table and set the default to GETDATE().
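
Something like this, I suppose (the column name is made up):

    -- every new quote becomes a new row, stamped on the way in
    ALTER TABLE quotes
        ADD quoted_at datetime NULL DEFAULT GETDATE()

    -- readers then take the row with MAX(quoted_at) per symbol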

Quote:>         The big problem you are having is with page level locking.
>         Revise your table design to alter the nature of the clustered
>         index to maximise the distribution spread of your table's data.
>         This will help to reduce the contention somewhat. And use
>         inserts instead of updates to generate more data and spread
>         it around on the table's pages.

I'll look into this.  Are you saying I should alter the number of
records that get stored on each page of data?  I guess if I can get SQL
Server to store one row per page I can simulate row-level locking?
Sorry if I have the wrong idea.  I haven't tweaked indexes before.

Thanks for the great tips!

Boris

 
 
 

Concurrency/reader-writer problem

Post by Anthony Mandi » Wed, 16 Jul 1997 04:00:00



> > Sounds like you're doing indirect updates. That is, delete followed by
> > insert instead of update. Review the rules for direct .vs. indirect and
> > see if you can make the update program do direct updates. Then dirty
> > reads should work for you.

        I don't think that you'd be getting deferred updates. Sybase had
        relaxed the restrictions on direct updates, but MS SQL Server
        would be using the older ruleset - unless they too have made
        changes. Regardless, they would both be doing direct updates if
        you are updating a non-null field that isn't part of an index.
        If this checks out for you, then it's just a performance issue
        with updates vs reads.

Quote:> I'm using BCP to blow the latest feed data into a temp table and then
> running a join query to update the production table from the temp
> table.  Where can I get more info on direct vs. indirect updates?  SQL
> Server docs?  web?  book?

        There is some info at Sybase's web site (I can't recall the exact
        location offhand; do a search and look for whitepapers etc.)
        There is also some info in their manuals. I don't know about MS
        unfortunately. If you switch to doing inserts, then this might
        help overall performance, since you remove the insertion into a
        temp table first and then the join. The join itself could also be a
        possible source of problems. You may also need to switch from
        using BCP to T-SQL insertion itself and monitor for any deadlocks
        on the client side so that you can redo the insert on a rollback.
        This all depends on how active your table is. Does MS SQL Server
        have any tools akin to Sybase's SQL Monitor for monitoring server
        performance at a fine level of granularity?

Quote:> > If your update program is doing mass updates instead of one row at a
> > time, you might try coding it to do table locks instead of page locks,
> > assuming you have the source for the update program.

> That's a good idea.  I'll take a look and see if that helps.  It would
> be a benefit to make everyone wait a few seconds, if I can make the
> update run faster.

        I think it really depends on how many entries are inserted/updated
        at a time compared to how many readers. Obviously, the ideal scenario
        is to have it all working in real time. This depends a lot on your
        hardware as well.

-am

 
 
 

Concurrency/reader-writer problem

Post by Business Review » Fri, 18 Jul 1997 04:00:00




Quote:> We are running an OLTP (multiple readers, one writer) environment with
> Microsoft SQL server.  I was hoping to get some suggestions on how we
> can increase concurrency on the database.

I'm evaluating ObjectStore for the exact same application (price quoting).

It has a special mechanism (MVCC) which allows 1 writer and multiple
readers. It works through the log. The writer writes new pages of
information in the log file (this is default behaviour in ObjectStore).
MVCC lets readers get older versions of the same page from the log file
while the writer is writing. The special mechanism makes sure that the
older versions are always available for readers.

I'm going to test it in a week or 2.

Richard.

 
 
 
