Showing posts with label performance. Show all posts

Wednesday, March 21, 2012

Out of options, Need performance help.

Hey guys, I need your help please.
Here is the scenario:

1. I need to return data to the client (result set varies from 20 to 10,000 rows)
2. I only want to show 20 records at a time
3. To get the info I need to display, I have to join 10 tables

When there is a small number of records it works fine, but once I get over 8,000 it becomes a problem:

1. The first version was:
Get all the data using one big query, return everything to the client, and display only 20 at a time (not very efficient).
Takes around 15 seconds to view 20 records.

2. Inspired by 4GuysFromRolla (http://www.4guysfromrolla.com/webtech/062899-1.shtml):
Use a stored procedure with server-side paging logic to get 20 records at a time. I had to pass in every filter parameter and such, and the SP had to sort the result set and return only the 20 records I need to display.
Takes around 5 seconds to view 20 records.

I still think it's slow. I know this is a very broad question, but is there any other way to do it, logically?

Any help is appreciated.
Wojo.

You could try running a few instances of the search in anticipation of the Next/Prev scenario.

While users are reading 20 records in their window, the application downloads a further 20 behind the scenes asynchronously. I can't give an example, but at least they wouldn't be aware of the 5 seconds, as long as it takes them longer than that to read the 20 on screen.

The only other example I have is for Delphi, sorry.

Well...you need to order by something...and you need to know what page you're on...

How about..

USE Northwind
GO

DECLARE @Page int, @sql varchar(8000)
SELECT @Page = 2
SELECT @sql = 'SELECT * FROM (SELECT TOP 20 * FROM (SELECT TOP '
 + CONVERT(varchar(10), @Page * 20)
 + ' * FROM Orders ORDER BY OrderId) AS A '
 + ' ORDER BY OrderId DESC) AS B '
 + ' ORDER BY OrderId'
EXEC(@sql)

Thanks guys,
Brett, how does this work?

To see it work...go to QA (Query Analyzer)...cut and paste the code and execute it...

What it does, roughly: if you want page x, the inner query selects the TOP x * 20 rows (the last 20 of which will be page x)...

Then it orders by a key descending to get the last 20 rows, then reorders those by the key ascending, so page x comes out in the correct order to be displayed...
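To make the nesting concrete, here is roughly what the dynamic string builds for page 2 (rows 21-40 of Northwind's Orders):

```sql
-- Page 2 = rows 21-40 of Orders, ordered by OrderId.
SELECT * FROM
    (SELECT TOP 20 * FROM
        (SELECT TOP 40 * FROM Orders ORDER BY OrderId) AS A  -- first 40 rows
     ORDER BY OrderId DESC) AS B                             -- last 20 of those, reversed
ORDER BY OrderId                                             -- back in ascending order
```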

test it, let me know if it works for you...

Oh, and btw, this will only transmit the 20 rows across...you don't want to be moving large chunks for no reason...especially since data is fluid and may change at any time...

Hmm,

What if I want to sort by other columns? Plus, my IDs aren't sequential :(

You have to order by something...

The order of data in a database is meaningless....

And yes, you can order by more than one thing...

And it doesn't have to be an int...
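As a sketch of ordering by more than one, non-integer column in the same nested-TOP pattern (using Northwind's Customers table; the CustomerID tiebreaker is just to make the ordering deterministic):

```sql
-- Page 2 of Customers ordered by ContactName (not sequential, not an int);
-- CustomerID breaks ties so each page is stable between requests.
DECLARE @Page int, @sql varchar(8000)
SELECT @Page = 2
SELECT @sql = 'SELECT * FROM (SELECT TOP 20 * FROM (SELECT TOP '
 + CONVERT(varchar(10), @Page * 20)
 + ' * FROM Customers ORDER BY ContactName, CustomerID) AS A '
 + ' ORDER BY ContactName DESC, CustomerID DESC) AS B '
 + ' ORDER BY ContactName, CustomerID'
EXEC(@sql)
```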

Look at a phonebook...they don't order by phone number...

Thanks Brett,

It looks like it works pretty well; I'm getting 1-2 seconds on the first couple of pages!

One last question: is there any way to get the total number of records from the inner query? I would rather not requery; is there any way to get the count while doing what we are doing?

thanks

What's the total row count?

Did you try it for the last page yet?

And...

One last question: is there any way to get the total number of records from the inner query? I would rather not requery; is there any way to get the count while doing what we are doing?

Why does this matter to you? Remembering the page number should be more important...no?

And cool site btw...do you sell just parts or whole systems...where are you based out of?

The row count was ~9k.
I did, and it took 6-7 seconds, which is not that bad considering almost nobody looks at the last pages :)

It matters to me because I'm displaying the total number of records, and I need that number to drive my paging logic (ex. page 1 out of 12).

Re: my site, it's just a personal project of mine; this question is for work :) And no, I don't actually sell anything there; I wrote the site to let people buy and sell hardware for no charge...

thanks again for all your help man!

What are you SELECTING, and what's the ORDER BY on?

Are they all indexed?

I don't like 6/7 seconds....

I will look into indexes later; I would like to get this running first, so I need the total count of records...

Wouldn't that just be @Pages * 20?

lol, I need to know what @Pages is, don't I?

I don't think I made myself clear. I want to know the total number of records that I'm going to page through myself, ex:

Found 100 records, you are on page number 1 out of 5 (since there are 20 recs per page)

thanks.

"i will look into indexes later"

If you are concerned about performance NOW, then you need to look into indexes NOW. I wondered what kind of indexes you had when I read that it started getting slow at 8,000 records. 8,000 records is peanuts.

Doing a SELECT COUNT(*) to get the rowcount should not be too expensive. Another method is to select the rowcount from the sysindexes table.

I should have mentioned this before, but I do it instinctively; you may already be doing it.

When you join your tables, don't use * and avoid large text fields. It can be quite a surprise how long they take.

Try a skeleton version of the query that just selects and joins the keys, then work up to include the fields you need.

Sorry if you are already doing that. And nice site you have there.
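The COUNT(*) suggestion above, combined with the paging math, might look something like this (the Orders table stands in for whatever filtered query is being paged; CEILING rounds the last partial page up):

```sql
-- Total rows matching the same filters as the paged query.
DECLARE @Total int, @PageCount int
SELECT @Total = COUNT(*) FROM Orders

-- 20 records per page; dividing by 20.0 keeps the math non-integer.
SELECT @PageCount = CEILING(@Total / 20.0)

SELECT @Total AS TotalRecords, @PageCount AS TotalPages

-- Quick-and-dirty alternative from sysindexes (approximate, SQL Server 2000):
SELECT rows FROM sysindexes
WHERE id = OBJECT_ID('Orders') AND indid < 2
```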

Friday, March 9, 2012

OSQL performance slow vs ISQLW

I have a script to run against a SQL Server 2000 database (Windows 2003 OS). The script creates temp tables and then uses the manipulated data to update the database.
The problem is that when I run the script in Query Analyzer it takes less than 10 minutes to execute, but when I run the same script using OSQL it takes over an hour.
Has anyone seen anything like this? Any suggestions would be great.
Possibly different SET settings can affect things. Also, you could try SET NOCOUNT ON and see if it makes any difference.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Wex" <Wex@.discussions.microsoft.com> wrote in message
news:1B68CD56-168D-419F-A01D-8FF310F2EC96@.microsoft.com...
>I have a script to run against a SQL Server 2000 database (Windows 2003 OS).
> The script creates temp tables and then uses the manipulated data to update
> the database.
> The problem is that when I run the script in Query Analyzer it takes less than
> 10 minutes to execute. When I run the same script using OSQL it takes over an
> hour.
> Has anyone seen anything like this? Any suggestions would be great.
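One way to chase down the SET-settings difference Tibor mentions is to compare session options between the two tools; this is just a diagnostic sketch:

```sql
-- Run this from both Query Analyzer and osql and compare the output;
-- options that the tools set differently (ARITHABORT, for example)
-- can lead to different query plans between the two sessions.
DBCC USEROPTIONS

-- And at the top of the long-running script, suppress the per-statement
-- "rows affected" messages, which add overhead under osql:
SET NOCOUNT ON
```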


Wednesday, March 7, 2012

OSQL Performance Problem

System info:
Win 2003 Server, SQL Server 2000, test server - no outside access from other users (no other applications running)

Table info:
DocId(int), a(nvarchar(10)), b(nvarchar(20)), c(nvarchar(10)), d(nvarchar(20)), e(nvarchar(10))
DocId has a number, all other columns are null. No index

I am using a stored procedure that updates the values based on the DocId. I have a program that creates a SQL script file to be executed - approx. 440,000 lines.

Example:
USE DatabaseName
GO
SET NOCOUNT ON
GO
exec sp_SPNAME @docId=1, @a = 'blah', @b = 'blah', @c = 'blah', @d = 'blah', @e = 'blah'
GO
exec sp_SPNAME @docId=2, @a = 'blah', @b = 'blah', @c = 'blah', @d = 'blah', @e = 'blah'
GO
...repeated approx. 440K times.

Question: When I execute this script via osql.exe, the update takes more than 24 hours... Any suggestions?

Thanks in advance.

System info:
Win 2003 Server, SQL Server 2000, test server - no outside access from other users (no other applications running)

Table info:
DocId(int), a(nvarchar(10)), b(nvarchar(20)), c(nvarchar(10)), d(nvarchar(20)), e(nvarchar(10))
DocId has a number, all other columns are null. No index

INDEX !!!

Put a clustered index on DocID.

CREATE CLUSTERED INDEX IXc_TableName_DocID ON TableName (DocID)
GO

Regards,

hmscott

Another route is to bulk copy/insert all the new data into a staging table, then do a single UPDATE against the base table. An index on DocId would be desirable when you start the DML.

Hi hmscott,

Thanks for the reply and the SQL. I added the index and, while it is much faster, it still takes more than 8 hours. Maybe this is normal for executing 440,000 statements?

Thanks,
Lens

Hi oj,

Thanks to you as well for the reply. I will change my program to produce a CSV file and see if a bulk update increases the speed.

Thanks,
Lens

Hi,

Just a quick status update: I changed my program to create a CSV file. Approx. 440,000 lines imported into a temp table in less than 2 minutes, and the update into the final table took less than two minutes as well...zoinks.

Thanks again.
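For reference, the bulk-load route described above might look roughly like this (file path, staging table name, and column names are hypothetical):

```sql
-- Stage the CSV, then apply one set-based UPDATE instead of 440K procedure calls.
CREATE TABLE StagingDocs (
    DocId int,
    a nvarchar(10), b nvarchar(20), c nvarchar(10),
    d nvarchar(20), e nvarchar(10)
)
GO

BULK INSERT StagingDocs
FROM 'C:\data\docs.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
GO

-- One set-based update, joining on the indexed DocId.
UPDATE t
SET t.a = s.a, t.b = s.b, t.c = s.c, t.d = s.d, t.e = s.e
FROM TableName t
INNER JOIN StagingDocs s ON s.DocId = t.DocId
GO
```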