T-SQL: Splitting large data loads by a declared batch size

A couple of days back, I was asked how I would use SQL grouping functions to split a huge data load into separate batches. Below is the code I came up with. The next logical step would be to load the generated statements into a temp table, iterate through it and execute each statement with sp_executesql; a sketch of that step follows the query below. It should be said that if the PK you are scanning has big gaps of missing IDs, this might not be the best or most accurate solution; a possible workaround for that case is sketched at the end.


USE [AdventureWorks2012];
GO

DECLARE @BATCHSIZE INT = 10000;

/* Check the MIN and MAX IDs in the source table */
SELECT MIN(SalesOrderID) AS MinID, MAX(SalesOrderID) AS MaxID
FROM [Sales].[SalesOrderHeader];

SELECT
	--i.PartitionID,
	--MIN(i.SalesOrderID) MinID,
	--MAX(i.SalesOrderID) MaxID,
	/* Build one INSERT statement per batch; each statement covers one contiguous ID range */
	'INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN ' +
	CAST(MIN(i.SalesOrderID) AS VARCHAR(10)) + ' AND ' +
	CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + '; '
	AS STMT
FROM
(
	/* Integer division assigns every row to a bucket of @BATCHSIZE consecutive IDs */
	SELECT
		SalesOrderID,
		SalesOrderID / @BATCHSIZE AS PartitionID
	FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID
--ORDER BY i.PartitionID;
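
Here is a minimal sketch of the next step described above: capture the generated statements in a temp table, then iterate through them and run each one with sp_executesql. The temp table name (#BatchStmts) and the cursor are illustrative, and the statements still contain the TARGET/SOURCE placeholders, so fill in real table and column names before executing.

DECLARE @BATCHSIZE INT = 10000;
DECLARE @stmt NVARCHAR(MAX);

IF OBJECT_ID('tempdb..#BatchStmts') IS NOT NULL DROP TABLE #BatchStmts;

/* Capture one INSERT statement per batch into a temp table */
SELECT
	'INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN ' +
	CAST(MIN(i.SalesOrderID) AS VARCHAR(10)) + ' AND ' +
	CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + ';' AS STMT
INTO #BatchStmts
FROM
(
	SELECT SalesOrderID, SalesOrderID / @BATCHSIZE AS PartitionID
	FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID;

/* Iterate through the captured statements and execute each one */
DECLARE stmt_cursor CURSOR LOCAL FAST_FORWARD FOR
	SELECT STMT FROM #BatchStmts;

OPEN stmt_cursor;
FETCH NEXT FROM stmt_cursor INTO @stmt;

WHILE @@FETCH_STATUS = 0
BEGIN
	EXEC sp_executesql @stmt;
	FETCH NEXT FROM stmt_cursor INTO @stmt;
END

CLOSE stmt_cursor;
DEALLOCATE stmt_cursor;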
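
As for the gaps caveat: one way around it is to partition on ROW_NUMBER() instead of on the raw ID, so every batch covers roughly @BATCHSIZE actual rows no matter how the IDs are distributed. Below is a minimal sketch of that idea against the same table; since the row numbers follow the ID order, each partition's MIN and MAX still define a contiguous, non-overlapping range for the BETWEEN predicate above.

DECLARE @BATCHSIZE INT = 10000;

SELECT
	MIN(i.SalesOrderID) AS MinID,
	MAX(i.SalesOrderID) AS MaxID
FROM
(
	SELECT
		SalesOrderID,
		/* Number the existing rows first, then divide: gaps in the IDs no longer matter */
		(ROW_NUMBER() OVER (ORDER BY SalesOrderID) - 1) / @BATCHSIZE AS PartitionID
	FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID
ORDER BY i.PartitionID;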
