SQL Server Paging

Paging is a technique where, instead of returning the entire result of a query in one go, you return a page at a time. Why would you do this?

Consider that you have a table that contains 1,000 rows of data. To get all the data from the table you might need to look up information from several other tables. Whilst this isn’t a problem for a relational database, there is a bigger issue: what if most of the data isn’t needed? What do I mean? Take an example like Amazon. You do a search for a product and you tend to get multiple pages of results. Rather than display all 1,000 products at once, you see 20 at a time; if you want the next 20, you ask for page 2, and so on until you either find what you are looking for or run out of things to look at.

 

Paging data is something that can be done in the client application. With this technique, you select everything from the database, send it all down to the client, and the client application shows only the records for the current page.

There are advantages to doing this. Firstly, you have the entire product list to hand, so you can do some fancy things like changing the order of the items on the fly: by price, by newest products first, by popularity, or by customer rating. Whilst you are at it, you can easily flick between ascending and descending order.

However, of those 1,000 rows returned, 980 never get shown. That’s a complete waste of processing power and network resources, all of which are valuable when it comes to writing high-performing applications.

With a bit more effort we can achieve similar behaviour at the database, and have it return only the 20 rows that you need.

 

To demonstrate paging techniques, I’m going to make use of the WideWorldImporters sample database.

select
	*
from [Purchasing].[PurchaseOrders]

 

With this simple query we can get all of the purchase orders.

Taking a quick look at this table, there are 2,074 rows. It took hardly any time to produce the output; however, on the downside, a few columns are normalised (SupplierID, DeliveryMethodID, ContactPersonID) and these will require extra lookups. Adding joins to get this data is not a big problem. Let’s update the query.

 

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from [Purchasing].[PurchaseOrders] as po
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	PurchaseOrderID desc

I’ve left the comments in so that you can see how I’ve developed this query.

Let’s take a look at the statistics IO

(2074 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 1, logical reads 4302, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'DeliveryMethods'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

Notice how there are 4,302 logical reads on the People table, while the PurchaseOrders table only has 19. This means that SQL Server is doing a lot of work to find people’s names and e-mail addresses, which is fair enough, as this table is referenced twice in the query.
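If you want to reproduce these figures yourself, the output above comes from the standard session-level statistics settings (each logical read is one 8 KB page read from the buffer pool):

set statistics io on;   -- report per-table page reads for each statement
set statistics time on; -- optional: report CPU and elapsed time as well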

 

What do we need for paging?

We need to know several things: which page we’re going to view, how many rows there are per page, and the order of the rows on each page (the sort order). In practical terms, we also need SQL Server to tell us how many rows there are in total; 2,074 is the magic number for our example.
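As a quick worked example (page 3 is a number I’ve picked for illustration): with 20 rows per page, page 3 covers rows 41 to 60, and 2,074 rows make 104 pages. The same arithmetic appears at the top of every script below.

declare @page int = 3;
declare @pageSize int = 20;
declare @totalRows int = 2074;

declare @startRow int = ((@page - 1) * @pageSize) + 1; -- 41
declare @endRow int = @startRow + (@pageSize - 1);     -- 60

select
	@startRow as startRow,
	@endRow as endRow,
	(@totalRows + @pageSize - 1) / @pageSize as totalPages; -- integer ceiling of 2074 / 20 = 104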

 

Here’s one method that I like to use. This code should run on SQL Server 2005 and above without issues (strictly speaking, the inline variable initialisation below needs SQL Server 2008; on 2005, declare the variables and assign them with separate SET statements).

 

declare @page int = 1; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);

select
    ROW_NUMBER() over ( order by PurchaseOrderID desc ) as rowNum,
	po.PurchaseOrderID
into #PagingTable
from [Purchasing].[PurchaseOrders] as po
order by
	PurchaseOrderID desc;

declare @rowCount int = @@RowCount;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as pt
join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
where pt.rowNum between @startRow and @endRow
order by
	pt.rowNum asc;


select
	@startRow as startRow,
	@endRow as endRow,
	@rowCount as rows;

Here’s the result; note 21 rows over the batch: 20 for the output query and one for the extra paging information (startRow, endRow and the total row count).

Let’s take a look at the statistics io for this query.

Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(2074 row(s) affected)

(1 row(s) affected)

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 96, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#PagingTable________________________________________________________________________________________________________00000000004C'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)

There are some things worth noting about this. Firstly, the People table now only has 96 logical reads, down from 4,302 for the full query. However, the PurchaseOrders table still has 19 reads. What’s going on there? Should that be smaller? Well, it should be, but it’s not, because the temp table isn’t optimised, so we end up with two clustered index scans. Let’s see if we can optimise this a bit more.
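If you want to see those two scans for yourself, one option (my suggestion, rather than something from the original run) is to capture the actual execution plan for the batch:

set statistics xml on; -- returns the actual plan as XML alongside each result set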

I tried adding a clustered primary key to the #PagingTable; however, this didn’t have the desired effect. Here’s the code.

declare @page int = 1; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);

create table #PagingTable(
	rowNum int not null primary key clustered,
	PurchaseOrderID int not null
)

insert into #PagingTable(rowNum,PurchaseOrderID)
select
    ROW_NUMBER() over ( order by PurchaseOrderID desc ) as rowNum,
	po.PurchaseOrderID
from [Purchasing].[PurchaseOrders] as po

declare @rowCount int = @@RowCount;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as pt
join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
where pt.rowNum between @startRow and @endRow
order by
	pt.rowNum asc;


select
	@startRow as startRow,
	@endRow as endRow,
	@rowCount as rows;

The result is that you still get 19 logical reads on the [Purchasing].[PurchaseOrders] table in the second query. The logical reads on the temp table did drop from 6 to 2, so that was a bonus.

The simplest way to optimise is to include all the data that you need from the table in the temp table itself. This should lower the overall cost of the query batch. In this case the starting cost is 44 logical reads in total between the PurchaseOrders table and the temp table.

Adding the columns to the temp table on its own left the total logical reads at 44, exactly the same, so let’s modify the temp table further and see if we can improve on this. Here’s the updated code.

declare @page int = 1; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);


create table #PagingTable(
	rowNum int not null primary key clustered,
	[PurchaseOrderID] [int] NOT NULL,
	[SupplierID] [int] NOT NULL,
	[OrderDate] [date] NOT NULL,
	[DeliveryMethodID] [int] NOT NULL,
	[ContactPersonID] [int] NOT NULL,
	[ExpectedDeliveryDate] [date] NULL,
	[SupplierReference] [nvarchar](20) NULL,
	[IsOrderFinalized] [bit] NOT NULL,
	[Comments] [nvarchar](max) NULL,
	[InternalComments] [nvarchar](max) NULL,
	[LastEditedBy] [int] NOT NULL,
	[LastEditedWhen] [datetime2](7) NOT NULL,
)

insert into #PagingTable(
	rowNum,
	PurchaseOrderID,
	SupplierID,
	OrderDate,
	DeliveryMethodID,
	ContactPersonID,
	ExpectedDeliveryDate,
	SupplierReference,
	IsOrderFinalized,
	Comments,
	InternalComments,
	LastEditedBy,
	LastEditedWhen
)
select
    ROW_NUMBER() over ( order by PurchaseOrderID desc ) as rowNum,
	po.PurchaseOrderID,
	po.SupplierID,
	po.OrderDate,
	po.DeliveryMethodID,
	po.ContactPersonID,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	po.LastEditedBy,
	po.LastEditedWhen
from [Purchasing].[PurchaseOrders] as po

declare @rowCount int = @@RowCount;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as po
--join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
where po.rowNum between @startRow and @endRow
order by
	po.rowNum asc;


select
	@startRow as startRow,
	@endRow as endRow,
	@rowCount as rows;

Note that this time the second query is not making use of the PurchaseOrders table at all.   Let’s take a look at the statistics IO.

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(2074 row(s) affected)

(1 row(s) affected)

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 96, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#PagingTable________________________________________________________________________________________________________00000000006D'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)

The first query hasn’t changed; we’re still getting 19 reads. The second query, however, has only 3 logical reads on the #PagingTable. In total that’s 22 reads. This is significant: it means we are reading far fewer pages in memory for the second query than for the first. It doesn’t matter whether we ask for page 1, page 10 or page 20; it’s still 19 reads for the first query and 3 for the second. In total this query batch now costs 198 logical reads (about 1.5 MB of 8 KB pages), instead of the original 4,326 (about 33.8 MB).

SQL Server 2012 syntax

SQL Server 2012 introduced new OFFSET … FETCH syntax that supports paging directly. Let’s see how that lines up. Firstly, here’s the new version of the query; I’ve made it output exactly the same result as before.

declare @page int = 1; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from [Purchasing].[PurchaseOrders] as po
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	PurchaseOrderID desc
offset @startRow - 1 rows
fetch next @pageSize rows only;

select
	@startRow as startRow,
	@endRow as endRow,
	count(*) as rows
from [Purchasing].[PurchaseOrders]

This looks much simpler than before, which is great for readability. One thing to keep in mind is that OFFSET/FETCH requires an ORDER BY clause, and OFFSET counts rows to skip: page 1 must skip @startRow - 1 rows, i.e. zero.

Here’s the statistics IO

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 42, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 42, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 100, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)
Table 'PurchaseOrders'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

In total, this query batch used 193 logical reads for page 1.

For page 103, here’s the statistics IO:

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 4122, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 4122, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 8518, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)
Table 'PurchaseOrders'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

 

That’s 16,787 logical reads, or to put it another way, about 131 MB of data read in memory (16,787 pages × 8 KB per page). The deeper the page, the more rows the joins have to process before the OFFSET discards them, which is why the People, Suppliers and DeliveryMethods reads have ballooned.

Let’s try to optimise this a bit more.

declare @page int = 103; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);


create table #PagingTable(
	rowNum int not null primary key clustered,
	[PurchaseOrderID] [int] NOT NULL
)

insert into #PagingTable(
	rowNum,
	PurchaseOrderID
)
select
    ROW_NUMBER() over ( order by PurchaseOrderID desc ) as rowNum,
	po.PurchaseOrderID
from [Purchasing].[PurchaseOrders] as po
order by PurchaseOrderID desc
offset @startRow - 1 rows
fetch next @pageSize rows only;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as pt
join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	pt.rowNum asc;

select
	@startRow as startRow,
	@endRow as endRow,
	count(*) as rows
from [Purchasing].[PurchaseOrders]

And here’s the statistics IO

Table '#PagingTable________________________________________________________________________________________________________000000000074'. Scan count 0, logical reads 41, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(20 row(s) affected)

(1 row(s) affected)

(20 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 80, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'DeliveryMethods'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#PagingTable________________________________________________________________________________________________________000000000074'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)
Table 'PurchaseOrders'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

Notice that this is page 103 that’s being asked for. In total that’s 166 reads for the entire query batch. This is much better than the 16,787 for the plain OFFSET version, and better than the 198 for my version without the OFFSET clause.

 

What about using a CTE as well?

declare @page int = 103; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);


with PagingTable as (
	select
	    ROW_NUMBER() over ( order by PurchaseOrderID desc ) as rowNum,
		po.PurchaseOrderID
	from [Purchasing].[PurchaseOrders] as po
	order by PurchaseOrderID desc
	offset @startRow - 1 rows
	fetch next @pageSize rows only
)
select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from PagingTable as pt
join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	pt.rowNum asc;

select
	@startRow as startRow,
	@endRow as endRow,
	count(*) as rows
from [Purchasing].[PurchaseOrders]

Here’s the statistics IO

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 96, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 67, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)
Table 'PurchaseOrders'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

249 logical reads in total, so overall this isn’t as good as using a temp table. That will be because we can place a clustering key on the temp table, which is then used later on to provide the sort order.

 

Finally, what about the earlier technique of writing all the information from the purchase orders table into the temporary table?

Here’s the syntax

declare @page int = 103; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);

create table #PagingTable(
	rowNum int not null IDENTITY(1,1) primary key clustered,
	[PurchaseOrderID] [int] NOT NULL,
	[SupplierID] [int] NOT NULL,
	[OrderDate] [date] NOT NULL,
	[DeliveryMethodID] [int] NOT NULL,
	[ContactPersonID] [int] NOT NULL,
	[ExpectedDeliveryDate] [date] NULL,
	[SupplierReference] [nvarchar](20) NULL,
	[IsOrderFinalized] [bit] NOT NULL,
	[Comments] [nvarchar](max) NULL,
	[InternalComments] [nvarchar](max) NULL,
	[LastEditedBy] [int] NOT NULL,
	[LastEditedWhen] [datetime2](7) NOT NULL,
)

insert into #PagingTable(
	PurchaseOrderID,
	SupplierID,
	OrderDate,
	DeliveryMethodID,
	ContactPersonID,
	ExpectedDeliveryDate,
	SupplierReference,
	IsOrderFinalized,
	Comments,
	InternalComments,
	LastEditedBy,
	LastEditedWhen
)
select
	po.PurchaseOrderID,
	po.SupplierID,
	po.OrderDate,
	po.DeliveryMethodID,
	po.ContactPersonID,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	po.LastEditedBy,
	po.LastEditedWhen
from [Purchasing].[PurchaseOrders] as po
order by PurchaseOrderID desc
offset @startRow - 1 rows
fetch next @pageSize rows only;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as po
--join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	po.rowNum asc;

select
	@startRow as startRow,
	@endRow as endRow,
	count(*) as rows
from [Purchasing].[PurchaseOrders]

Here’s the statistics IO

Table '#PagingTable________________________________________________________________________________________________________000000000098'. Scan count 0, logical reads 41, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PurchaseOrders'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(20 row(s) affected)

(1 row(s) affected)

(20 row(s) affected)
Table 'DeliveryMethods'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Suppliers'. Scan count 0, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'People'. Scan count 0, logical reads 80, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#PagingTable________________________________________________________________________________________________________000000000098'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(1 row(s) affected)
Table 'PurchaseOrders'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

In total this query batch used 223 logical reads, which beats the CTE version’s 249.

 

Conclusion

The OFFSET and FETCH NEXT n ROWS ONLY clauses are a useful addition to the T-SQL language. However, to get the best benefit from them, it is still worth splitting the query into two parts: one to find the page, and a second to get the rows for that page.

This is the code pattern that I will be following in future for writing code that includes paging.

declare @page int = 103; --Page 1 is the first page
declare @pageSize int = 20;

-- Variables above this line can be changed to select a different page

if OBJECT_ID('tempdb..#PagingTable' ) is not null
	drop table #PagingTable

declare @startRow int = ((@page - 1 ) * @pageSize) + 1;
declare @endRow int = @startRow + (@pageSize - 1);

create table #PagingTable(
	rowNum int not null IDENTITY(1,1) primary key clustered,
	[PurchaseOrderID] [int] NOT NULL,
	[SupplierID] [int] NOT NULL,
	[OrderDate] [date] NOT NULL,
	[DeliveryMethodID] [int] NOT NULL,
	[ContactPersonID] [int] NOT NULL,
	[ExpectedDeliveryDate] [date] NULL,
	[SupplierReference] [nvarchar](20) NULL,
	[IsOrderFinalized] [bit] NOT NULL,
	[Comments] [nvarchar](max) NULL,
	[InternalComments] [nvarchar](max) NULL,
	[LastEditedBy] [int] NOT NULL,
	[LastEditedWhen] [datetime2](7) NOT NULL,
)

insert into #PagingTable(
	PurchaseOrderID,
	SupplierID,
	OrderDate,
	DeliveryMethodID,
	ContactPersonID,
	ExpectedDeliveryDate,
	SupplierReference,
	IsOrderFinalized,
	Comments,
	InternalComments,
	LastEditedBy,
	LastEditedWhen
)
select
	po.PurchaseOrderID,
	po.SupplierID,
	po.OrderDate,
	po.DeliveryMethodID,
	po.ContactPersonID,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	po.LastEditedBy,
	po.LastEditedWhen
from [Purchasing].[PurchaseOrders] as po
order by PurchaseOrderID desc
offset @startRow - 1 rows
fetch next @pageSize rows only;

select
	po.PurchaseOrderID,
	--po.SupplierID,
	supplier.SupplierName,
	supplier.PhoneNumber,
	po.OrderDate,
	--po.DeliveryMethodID,
	deliveryMethods.DeliveryMethodName,
	--po.ContactPersonID,
	contactPeople.FullName,
	contactPeople.EmailAddress,
	po.ExpectedDeliveryDate,
	po.SupplierReference,
	po.IsOrderFinalized,
	po.Comments,
	po.InternalComments,
	--po.LastEditedBy,
	people.FullName,
	people.EmailAddress,
	po.LastEditedWhen
from #PagingTable as po
--join [Purchasing].[PurchaseOrders] as po on pt.PurchaseOrderID = po.PurchaseOrderID
join [Application].[People] as people on people.PersonID = po.LastEditedBy
join [Purchasing].[Suppliers] as supplier on supplier.SupplierID = po.SupplierID
join [Application].[DeliveryMethods] as DeliveryMethods on deliveryMethods.DeliveryMethodID = po.DeliveryMethodID
join [Application].[People] as contactPeople on contactPeople.PersonID = po.ContactPersonID
order by
	po.rowNum asc;

select
	@startRow as startRow,
	@endRow as endRow,
	count(*) as rows
from [Purchasing].[PurchaseOrders]
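In practice I’d expect this pattern to end up in a stored procedure, so that @page and @pageSize arrive as parameters. Here’s a minimal sketch of that idea; the procedure name is hypothetical and, for brevity, it uses the narrow key-only paging table with a cut-down column list, but the wide temp table and full join list from the pattern above drop in the same way.

create procedure [Purchasing].[GetPurchaseOrdersPage] -- hypothetical name
	@page int = 1, -- page 1 is the first page
	@pageSize int = 20
as
begin
	set nocount on;

	declare @startRow int = ((@page - 1) * @pageSize) + 1;
	declare @endRow int = @startRow + (@pageSize - 1);

	-- Narrow paging table: just the sort position and the key for one page.
	create table #PagingTable(
		rowNum int not null identity(1,1) primary key clustered,
		PurchaseOrderID int not null
	);

	insert into #PagingTable(PurchaseOrderID)
	select po.PurchaseOrderID
	from [Purchasing].[PurchaseOrders] as po
	order by po.PurchaseOrderID desc
	offset @startRow - 1 rows
	fetch next @pageSize rows only;

	-- The wide select goes here, joining back to PurchaseOrders (and on to
	-- Suppliers, DeliveryMethods and People) via the keys in #PagingTable.
	select
		po.PurchaseOrderID,
		po.OrderDate,
		po.SupplierReference
	from #PagingTable as pt
	join [Purchasing].[PurchaseOrders] as po on po.PurchaseOrderID = pt.PurchaseOrderID
	order by pt.rowNum asc;

	select
		@startRow as startRow,
		@endRow as endRow,
		(select count(*) from [Purchasing].[PurchaseOrders]) as rows;
end

-- Example call:
-- exec [Purchasing].[GetPurchaseOrdersPage] @page = 103, @pageSize = 20;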

 

Continue to part two to build on this
