Quote:> We have a database with 2.5 million+ records.
> We are finding that returning queries is very slow.
You need to do some performance tuning. I do this, and there are a number
of other regular posters here who do it on a regular basis. You can also
try doing this yourself- there's plenty of information available on how to
go about it, but it's more of an art than a science.
Generally speaking, some queries will always take longer than others. If
you are retrieving every row in a 2.5M row table, there is no way to make
it go faster short of throwing faster hardware at the problem. However,
some queries, like range searches, aggregates and grouping queries, can
be tuned by examining both the query and the indexes on the table.
Simple queries (select ... where UniqueKey = <somevalue>) should be very
fast, due to the nature of balanced-tree indexes. You just need to be
sure you're not invalidating the search argument in the query, and that
you've defined an index that can actually help.
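To illustrate what "invalidating the search argument" means, here is a sketch (table and column names are mine, not from the post): an expression or function applied to the indexed column keeps the optimizer from seeking on the index.

```
-- Sargable: the optimizer can seek on an index over OrderID
SELECT * FROM Orders WHERE OrderID = 12345

-- Not sargable: the expression on the column forces a scan
SELECT * FROM Orders WHERE OrderID + 0 = 12345

-- Not sargable: a function on the column has the same effect
SELECT * FROM Orders WHERE UPPER(CustomerCode) = 'ACME'
```

The fix is usually to rewrite the predicate so the bare column sits alone on one side of the comparison.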
Quote:> Does anyone have some benchmark speeds for performance, showing what
> difference more RAM makes, or different SQL server set ups etc.. ?
If you haven't done some work with sp_configure, this is definitely the
first place to start. Get a good book (6.5 Unleashed works for me) and an
afternoon, and understand the basics of memory settings, procedure cache,
user connections, etc.
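As a starting point, sp_configure with no arguments lists the current settings; the values below are purely illustrative, not recommendations (on 6.5 the 'memory' option is specified in 2K units, so check the documentation before changing it).

```
-- List all current configuration options and their values
EXEC sp_configure

-- Example only: set server memory (2K units on 6.5), then apply
EXEC sp_configure 'memory', 16384
RECONFIGURE
```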
Quote:> I've heard it is possible to put data into a DB with a comma delimited
> file. Is this possible if so how?
Use BCP. This is a bulk copy utility available in the \mssql\binn
directory. It's a command line utility. For CSV files, some folks find it
easier to import the data to Excel or Access (to use the ISAM drivers)
then put it into SQL Server. This is an easy approach for a one time
deal, but BCP is the way to go for regular imports. Read up in the Books
Online for the full syntax.
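A typical invocation looks something like this (server, database, and file names here are made up for the example):

```
REM Bulk-copy a comma-delimited file into the Orders table
REM -c = character mode, -t, = comma field terminator, -r\n = row terminator
bcp mydb..Orders in orders.csv -c -t, -r\n -Usa -P -SMYSERVER
```

Run it from a command prompt; check the row count it reports against your source file to confirm nothing was rejected.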
Quote:> We are currently adding records using single statements for each row, is
> there a faster way?
Maybe. Depends on where the data comes from. If you're adding it from a
front end or some other manual source, the best way to get speed is to
batch your inserts:
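Something along these lines (table and values are invented for the example): wrapping a group of inserts in a single transaction commits them as one unit instead of paying the logging overhead per row.

```
BEGIN TRANSACTION
    INSERT Orders (OrderID, CustomerID) VALUES (1001, 42)
    INSERT Orders (OrderID, CustomerID) VALUES (1002, 17)
    INSERT Orders (OrderID, CustomerID) VALUES (1003, 59)
    -- ...more rows per batch...
COMMIT TRANSACTION
```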
Make sure you understand the ramifications of an insert failing inside
the transaction- a failure of the fifth insert undoes the work
of the four that came before it. Experiment with the number of inserts in
each batch to find what works best for your data.
Depending on concurrency, you may be able to get more speed on inserts by
defining your clustered index to distribute the insert location. Playing
with the fillfactor on a regular basis also improves insert performance,
especially if there are lots of indexes on a table.
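For example, a lower fillfactor leaves free space on each index page so inserts cause fewer page splits (index and column names below are illustrative):

```
-- Leave 25% free space per page to absorb inserts
CREATE CLUSTERED INDEX ix_orders_custdate
    ON Orders (CustomerID, OrderDate)
    WITH FILLFACTOR = 75
```

Note that fillfactor decays as pages fill up, which is why rebuilding the index periodically ("on a regular basis") is part of the routine.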
Quote:> I've seen people mention BCP, what is this?
> In advance.. thankyou
> Matt Pearson
Ben McEwan, President, Geist, LLC