A couple of days back I was asked how I would use SQL grouping functions to split a huge data load into separate batches. Below is the code I came up with. The next logical step would be to load the generated statements into a temp table, iterate through it, and execute each statement with sp_executesql (a sketch of that step follows the script). It should be said that if the PK you are scanning has big gaps of missing IDs, this might not be the best or most accurate solution, since the batches are sized by ID range rather than by actual row count.
USE [AdventureWorks2012];
GO

DECLARE @BATCHSIZE INT = 10000;

/* CHECK THE MIN AND MAX IDs IN THE SOURCE TABLE */
SELECT MIN(SalesOrderID) MinID,
       MAX(SalesOrderID) MaxID
FROM [Sales].[SalesOrderHeader];

/* GENERATE ONE INSERT STATEMENT PER BATCH */
SELECT
    --i.PartitionID,
    --MIN(i.SalesOrderID) MinID,
    --MAX(i.SalesOrderID) MaxID,
    'INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN '
        + CAST(MIN(i.SalesOrderID) AS VARCHAR(10))
        + ' AND '
        + CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + '; ' AS STMT
FROM
(
    SELECT SalesOrderID,
           SalesOrderID / @BATCHSIZE AS PartitionID
    FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID
--ORDER BY i.PartitionID;
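And here is a minimal sketch of the next step described above: the same grouping query dumps its output into a temp table, and a cursor walks through it, running each statement with sp_executesql. The temp table name #BatchStmts is mine, and TARGET, SOURCE and the column list are still the placeholders from the generated statements, so you would substitute real object names before running this.

DECLARE @BATCHSIZE INT = 10000;

IF OBJECT_ID('tempdb..#BatchStmts') IS NOT NULL
    DROP TABLE #BatchStmts;

/* LOAD THE GENERATED STATEMENTS INTO A TEMP TABLE */
SELECT
    i.PartitionID,
    CAST('INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN '
        + CAST(MIN(i.SalesOrderID) AS VARCHAR(10))
        + ' AND '
        + CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + ';' AS NVARCHAR(MAX)) AS STMT
INTO #BatchStmts
FROM
(
    SELECT SalesOrderID,
           SalesOrderID / @BATCHSIZE AS PartitionID
    FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID;

/* ITERATE THROUGH THE STATEMENTS AND EXECUTE THEM ONE BY ONE */
DECLARE @stmt NVARCHAR(MAX);

DECLARE stmt_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT STMT FROM #BatchStmts ORDER BY PartitionID;

OPEN stmt_cursor;
FETCH NEXT FROM stmt_cursor INTO @stmt;

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @stmt;   -- each batch runs as its own small statement
    FETCH NEXT FROM stmt_cursor INTO @stmt;
END

CLOSE stmt_cursor;
DEALLOCATE stmt_cursor;

Running the inserts batch by batch like this keeps each transaction small, which is the whole point of the exercise when you are moving a very large table.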