TSQL code to see all cols in a DB containing only NULLs

A colleague of mine recently asked me if I knew a quick way to get a list of all the columns in a database that contain only NULLs. Well, here’s a script I came up with. I am pretty sure there are other ( even better ) options out there, but I liked this one, so I decided to share the code here.

BTW, the blog post on posting MSSQL logger values into Elasticsearch using Python is coming soon, so stay tuned 🙂

 

DECLARE @ID int;
DECLARE @query1 nvarchar(MAX);
DECLARE @TableName nvarchar(255);
DECLARE @ColumnName nvarchar(255);
DECLARE @SchemaName nvarchar(255);

USE AdventureWorksDW2014; --USE THE DB YOU ARE INTERESTED IN
SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#tmp_counts') IS NOT NULL
DROP TABLE #tmp_counts;

IF OBJECT_ID('tempdb..#RESULTSET') IS NOT NULL
DROP TABLE #RESULTSET;

CREATE TABLE #tmp_counts (
ID int IDENTITY (1,1) PRIMARY KEY,
SchemaName nvarchar(255),
TableName nvarchar(255),
ColumnName nvarchar(255));

CREATE TABLE #RESULTSET (
SchemaName nvarchar(255),
TableName NVARCHAR(255),
ColumnName NVARCHAR(255),
[CountOfRows] int,
[CountOfRows_NULL] int);

INSERT INTO #tmp_counts
SELECT
s.name [Schema],
t.name [TableName],
c.name [ColumnName]
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.Is_Nullable = 1
ORDER BY 1,2;

WHILE EXISTS (SELECT * FROM #tmp_counts)

BEGIN

SELECT
TOP(1) @ID = ID,
@SchemaName = Schemaname,
@TableName = TableName,
@ColumnName = ColumnName
FROM #tmp_counts
ORDER BY ID;

SELECT @query1 =
'INSERT INTO #RESULTSET  SELECT '''+@SchemaName+''','''+ @TableName+ ''' AS TableName,'''+@ColumnName+ ''' AS ColumnName,
(SELECT COUNT(*) AS CountOfRows FROM ['+@SchemaName+'].['+@TableName+'] WITH (NOLOCK)),
(SELECT COUNT(*) AS CountOfRows_NULL FROM ['+@SchemaName+'].['+@TableName+'] WITH (NOLOCK) WHERE ['+@ColumnName+'] IS NULL);';

EXEC (@query1);

DELETE FROM #tmp_counts WHERE ID = @ID;
END

--YOUR FINAL QUERY IS HERE:
SELECT 'Column with all rows NULL' [Check Description], *
FROM #RESULTSET
WHERE CountOfRows > 0 AND
CountOfRows = CountOfRows_NULL;


Posting SQL Server ETL Logging and Monitoring table data into ElasticSearch using Python part 1 of 3

Lately I’ve been working on a very specific project. The concept: pass our ETL logging and monitoring MSSQL table data into ElasticSearch, let the TPM build Kibana dashboards on top, and raise events that go into PagerDuty. In this series, I will focus on the first part, the push of the ETL Logging and Monitoring table data into the ElasticSearch document DB using a Python script. The kudos here goes to a colleague of mine who helped me out with the Python code, as I am still a Python student. BTW, if you’re interested in learning Python, here is a link to a great book online. ( Personally, I like playing around with Python data science projects the most and hope to write a post or two on that topic as soon as possible. )

This first blog post covers the installation script for the MSSQL table that could be used for logging.

Part 2 of this series will handle the Python code, and the last part will wrap up the project.

So here is an installation script for inspiration with some dummy data inserts:


USE [master]
GO

CREATE DATABASE [Logger]
GO

USE [Logger]
GO

CREATE SCHEMA [LogMon]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [LogMon].[LogMonitor](
[LogMonitor_ID] [int] IDENTITY(1,1) NOT NULL,
[Execution_UTC_Datetime] [datetime] NULL,
[Source_File_Processed_Datetime] [datetime] NULL,
[Source_FileName_Timestamp_Date] [date] NULL,
[Source_FileName] [nvarchar](255) NULL,
[Customer_Name] [nvarchar](255) NULL,
[Data_Feed_Type] [nvarchar](255) NULL,
[Server_Name] [nvarchar](255) NULL,
[Stage_Name] [nvarchar](255) NULL,
[Target_Name] [nvarchar](255) NULL,
[Check_Name] [nvarchar](255) NULL,
[Check_Value] [int] NULL,
[Moved_To_ES] [bit] NULL,
PRIMARY KEY CLUSTERED ([LogMonitor_ID] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]
GO

ALTER TABLE [LogMon].[LogMonitor] ADD DEFAULT ((0)) FOR [Moved_To_ES]
GO

INSERT INTO [LogMon].[LogMonitor]
([Execution_UTC_Datetime],
 [Source_File_Processed_Datetime],
 [Source_FileName_Timestamp_Date],
 [Source_FileName],
 [Customer_Name],
 [Data_Feed_Type],
 [Server_Name],
 [Stage_Name],
 [Target_Name],
 [Check_Name],
 [Check_Value])
SELECT i.*
FROM (
   SELECT
     GETDATE() [Execution_UTC_Datetime],
     DATEADD(DAY,-1,GETDATE()) [Source_File_Processed_Datetime],
     CAST(DATEADD(DAY,-1,GETDATE()) AS DATE) [Source_FileName_Timestamp_Date],
     'Entities_'+CAST(CAST(DATEADD(DAY,-1,GETDATE()) AS DATE) AS VARCHAR(10))+'.txt' [Source_FileName],
     'Customer_Name_A' [Customer_Name],
     'Entities' [Data_Feed_Type],
     @@SERVERNAME [Server_Name],
     'Stage1' [Stage_Name],
     'dbo.Entity' [Target_Name],
     'Created Rows' [Check_Name],
     ABS(CHECKSUM(NewId())) % 1000 [Check_Value]
   UNION ALL
   SELECT
     GETDATE(),
     DATEADD(DAY,-1,GETDATE()),
     CAST(DATEADD(DAY,-1,GETDATE()) AS DATE),
     'Entities_'+CAST(CAST(DATEADD(DAY,-1,GETDATE()) AS DATE) AS VARCHAR(10))+'.txt',
     'Customer_Name_A',
     'Entities',
     @@SERVERNAME,
     'Stage1',
     'dbo.Entity',
     'Modified Rows',
     ABS(CHECKSUM(NewId())) % 1000
   UNION ALL
   SELECT
     GETDATE(),
     DATEADD(DAY,-2,GETDATE()),
     CAST(DATEADD(DAY,-2,GETDATE()) AS DATE),
     'Contacts_'+CAST(CAST(DATEADD(DAY,-2,GETDATE()) AS DATE) AS VARCHAR(10))+'.csv',
     'Customer_Name_B',
     'Contacts',
     @@SERVERNAME,
     'Stage2',
     'dbo.Contact',
     'Created Rows',
     ABS(CHECKSUM(NewId())) % 1000
   UNION ALL
   SELECT
     GETDATE(),
     DATEADD(DAY,-2,GETDATE()),
     CAST(DATEADD(DAY,-2,GETDATE()) AS DATE),
     'Contacts_'+CAST(CAST(DATEADD(DAY,-2,GETDATE()) AS DATE) AS VARCHAR(10))+'.csv',
     'Customer_Name_B',
     'Contacts',
     @@SERVERNAME,
     'Stage2',
     'dbo.Contact',
     'Modified Rows',
     ABS(CHECKSUM(NewId())) % 1000
) i
GO

And the dummy data that will be passed to ElasticSearch looks like this:

[Image: sample rows from the LogMon.LogMonitor table]

 

TSQL histogram

Sometimes I like to fiddle around with T-SQL. I’m not sure how useful this trick might be, but here is some code I came up with that delivers histograms based on your data and a few variables you define. The dataset for this script is the well-known Adventure Works DW table dbo.factInternetSales. You can set the bucket count and bar chart width variables to fine-tune the output. This histogram splits the dataset into the declared buckets based on the ProductKey FK column values.

SET NOCOUNT ON;

USE AdventureWorksDW2014;

DECLARE @BucketCount DECIMAL(8,3) = 10;
DECLARE @BarChartWidth INT = 100;
DECLARE @iKeyCount DECIMAL(8,3) = (SELECT COUNT(DISTINCT ProductKey) FROM FactInternetSales);
DECLARE @iBucketSize DECIMAL(8,3) = @iKeyCount / @BucketCount;

SELECT
i.*,
REPLICATE('=', @BarChartWidth * i.SumSalesAmount / (SELECT SUM(SalesAmount) FROM factInternetSales)) AS [BarChart]
FROM (
     SELECT
     ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) Bucket_ID,
     MIN(ii.ProductKey) Bucket_Range_From,
     MAX(ii.ProductKey) Bucket_Range_To,
     COUNT(ii.Bucket_ProductKeys_Count) Bucket_ProductKeys_Count,
     SUM(ii.SumSalesAmount) SumSalesAmount
     FROM (
          SELECT
          ProductKey,
          CEILING(CAST((ROW_NUMBER() OVER (ORDER BY ProductKey) )/@iBucketSize AS DECIMAL(8,3))) iBucket_ID,
          COUNT(*) Bucket_ProductKeys_Count,
          SUM(SalesAmount) SumSalesAmount
          FROM FactInternetSales
          GROUP BY ProductKey
          ) ii
      GROUP BY ii.iBucket_ID
      ) i;

And the result may look like this:

[Image: histogram output split by ProductKey]

But I also wanted a more accurate solution. Digging deeper, I came up with code that splits the dataset into the declared buckets based on the composite PK SalesOrderNumber, SalesOrderLineNumber. ( I also added precision to the decimal datatype. ) This code still feels quite straightforward to me, but it gets a little more complex.

SET NOCOUNT ON;

USE AdventureWorksDW2014;

DECLARE @BarChartWidth INT = 100;
DECLARE @BucketCount DECIMAL(38,18) = 10;
DECLARE @iCount DECIMAL(38,18) = (SELECT COUNT(*) FROM FactInternetSales);
DECLARE @iBucketSize DECIMAL(38,18) = @iCount / @BucketCount;

;WITH CTE AS
(
     SELECT
     i.iKey,
     i.iID,
     i.iBucket_ID,
     i.SumSalesAmount
     FROM (
          SELECT
          TOP 100 PERCENT
          CAST([SalesOrderNumber] AS VARCHAR) + '-' + CAST([SalesOrderLineNumber] AS VARCHAR) iKey,
          ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) iID,
          CEILING(CAST((ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) ) / @iBucketSize AS DECIMAL(38,18))) iBucket_ID,
          [SalesOrderNumber],
          [SalesOrderLineNumber],
          SUM(SalesAmount) SumSalesAmount
          FROM FactInternetSales
          GROUP BY [SalesOrderNumber],[SalesOrderLineNumber]
          ORDER BY CAST(REPLACE([SalesOrderNumber],'SO','') AS INT),[SalesOrderLineNumber]
          ) i
)

SELECT
i.iBucket_ID,
i.Bucket_Range_From,
i.Bucket_Range_To,
(SELECT Cfrom.iKey FROM CTE Cfrom WHERE Cfrom.iID = i.Bucket_Range_From) Bucket_Range_SalesOrderKey_From,
(SELECT Cto.iKey FROM CTE Cto WHERE Cto.iID = i.Bucket_Range_To) Bucket_Range_SalesOrderKey_To,
i.Bucket_Row_Count,
i.Bucket_Sales_Amount,
REPLICATE('=', @BarChartWidth * i.Bucket_Sales_Amount / (SELECT SUM(SalesAmount) FROM factInternetSales)) AS [BarChart]
FROM (
     SELECT
     TOP 100 PERCENT
     ii.iBucket_ID,
     MIN(ii.iID) Bucket_Range_From,
     MAX(ii.iID) Bucket_Range_To,
     COUNT(ii.iID) Bucket_Row_Count,
     SUM(ii.SumSalesAmount) Bucket_Sales_Amount
     FROM
     CTE ii
     GROUP BY ii.iBucket_ID
     ORDER BY ii.iBucket_ID
     ) i
ORDER BY iBucket_ID;

The final result is as expected ( check the Bucket_Row_Count column ) and can look like this:

[Image: histogram output split by the composite PK]

 

Compare multiple DB schemas

Sometimes you may need to do a quick examination of how databases that are supposed to share the same schema actually differ. This quick query lets you find the outlying and the intersecting columns across multiple databases. If you are looking only for the outlying columns and the DBs they live in, use the code below as-is. If you are looking for the intersection of multiple DBs, just switch HAVING COUNT(0) <> to HAVING COUNT(0) = and remove the list of databases the columns are in; see the sketch after the query.

In my example, I created 3 DBs, Test1 to Test3, with some differences between their schemas. The key identifying a unique column is defined in the Compound_Key column.

[Image: schema comparison output]

IF OBJECT_ID('tempdb..#DB_Schema_Compare') IS NOT NULL
DROP TABLE #DB_Schema_Compare;
CREATE TABLE #DB_Schema_Compare
( [DB_Name] NVARCHAR(100),
[Compound_Key] NVARCHAR(Max)
)

USE Test3; --repeat the INSERT below for each compared DB ( Test1, Test2, Test3 )

INSERT INTO #DB_Schema_Compare
SELECT
'Test3',
--s.name SchemaName,
--t.name TableName,
--c.name ColumnName,
--c.is_computed ColumnIsComputed,
--c.is_identity ColumnIsIdentity,
--c.is_nullable ColumnIsNullable,
--c.collation_name ColumnCollationName,
--c.max_length ColumnMaxLength,
CAST(
CAST(s.name AS VARCHAR(200)) + '^' +
CAST(t.name AS VARCHAR(200)) + '^' +
CAST(c.name AS VARCHAR(200)) + '^' +
CAST(c.is_computed AS VARCHAR(1)) + '^' +
CAST(c.is_identity AS VARCHAR(1)) + '^' +
CAST(c.is_nullable AS VARCHAR(1)) + '^' +
ISNULL(CAST(c.collation_name AS VARCHAR(200)) + '^','^') +
CAST(c.max_length AS VARCHAR(10)) + '^' +
CAST(ty.name AS VARCHAR(200))
AS VARCHAR(MAX)) CompoundKey
FROM sys.columns c
inner join sys.tables t on c.object_id = t.object_id
inner join sys.schemas s on s.schema_id = t.schema_id
inner join sys.types ty on ty.system_type_id = c.system_type_id;

SELECT
i.*,
STUFF(( SELECT ',' + SUB.[DB_Name] AS [text()]
FROM
#DB_Schema_Compare SUB
WHERE
SUB.Compound_Key = i.Compound_Key
ORDER BY SUB.[DB_Name]
FOR XML PATH('')
), 1, 1, '' )
AS [DBs_we_HAVE_this_compound_key_in]
FROM (
     SELECT
     COUNT(0) CNT,
     Compound_Key
     FROM
     #DB_Schema_Compare
     GROUP BY [Compound_Key]
     HAVING COUNT(0) <> (SELECT COUNT(DISTINCT [DB_Name]) FROM #DB_Schema_Compare)
) i;
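For the intersection case mentioned above, a minimal sketch could look like this ( same temp table, just the flipped HAVING condition and no DB list ):

--compound keys present in every database loaded into #DB_Schema_Compare
SELECT
Compound_Key,
COUNT(0) CNT
FROM #DB_Schema_Compare
GROUP BY Compound_Key
HAVING COUNT(0) = (SELECT COUNT(DISTINCT [DB_Name]) FROM #DB_Schema_Compare);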

TSQL Large data loads split by a declared batch size

A couple of days back, I was asked how I would use SQL grouping functions to split a huge data load into separate batches. Below is the code I came up with. The next logical step would be to load the statements into a temp table, iterate through it and execute the statements with sp_executesql; a sketch of that step follows the query. It should be said that if you have big gaps of missing IDs in the PK you are scanning, this might not be the best and most accurate solution.


USE [AdventureWorks2012];
GO

DECLARE @BATCHSIZE INT = 10000;

/*CHECK THE MIN AND MAX IDs IN THE SOURCE TABLE*/
SELECT MIN(SalesOrderID) MinID,MAX(SalesOrderID) MaxID FROM [Sales].[SalesOrderHeader];
/**/

SELECT
--i.PartitionID,
--MIN(i.SalesOrderID) MinID,
--MAX(i.SalesOrderID) MaxID,
'INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN '+
CAST(MIN(i.SalesOrderID) AS VARCHAR(10)) + ' AND ' +
CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + '; '
AS STMT
FROM
(
	SELECT
	SalesOrderID,
	SalesOrderID / @BATCHSIZE PartitionID
	FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID
--ORDER BY i.PartitionID;
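And a minimal sketch of that next step, loading the generated statements into a temp table and executing them one by one with sp_executesql ( TARGET and SOURCE inside the statements are still placeholders, so the EXEC stays commented out; drop the duplicate DECLARE if you run this in the same batch as the query above ):

DECLARE @BATCHSIZE INT = 10000;

IF OBJECT_ID('tempdb..#stmts') IS NOT NULL
DROP TABLE #stmts;

SELECT
ROW_NUMBER() OVER (ORDER BY MIN(i.SalesOrderID)) AS ID,
'INSERT INTO TARGET(..) SELECT COL1,COL2,.. FROM SOURCE WITH (NOLOCK) WHERE SalesOrderID BETWEEN '+
CAST(MIN(i.SalesOrderID) AS VARCHAR(10)) + ' AND ' +
CAST(MAX(i.SalesOrderID) AS VARCHAR(10)) + ';' AS STMT
INTO #stmts
FROM
(
	SELECT
	SalesOrderID,
	SalesOrderID / @BATCHSIZE PartitionID
	FROM [Sales].[SalesOrderHeader] WITH (NOLOCK)
) i
GROUP BY i.PartitionID;

DECLARE @ID BIGINT;
DECLARE @stmt NVARCHAR(MAX);

WHILE EXISTS (SELECT * FROM #stmts)
BEGIN
	SELECT TOP(1) @ID = ID, @stmt = STMT FROM #stmts ORDER BY ID;
	--EXEC sp_executesql @stmt; --enable once TARGET and SOURCE are replaced with real objects
	DELETE FROM #stmts WHERE ID = @ID;
END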

sp_testlinkedserver in a try…catch block

Sometimes you definitely need to go with quick workarounds. I am pretty sure I am not the only BI developer working from time to time with legacy and somewhat wacky old code used for production purposes. This time I came across a legacy scheduled stored procedure filling a dataset for SSRS reporting purposes, calling openrowset to run an MDX query against an OLAP cube, but the linked server was failing from time to time because of weak connections. Whenever the linked server call failed, there would simply be no reporting, as the MDX results were later used in an INNER JOIN 🙂 . I kind of wonder what the people writing this code were thinking back then. Anyway, I needed this SP to stop failing, to return results even if the linked server connection failed, and to be informed that the linked server call failed so I could react and persist the results from the last successful run.

The solution is quite easy and so far seems bulletproof. Let’s use this sample MDX query wrapped in an openrowset as an example:

SELECT a.*
FROM OpenRowset( 'MSOLAP','DATASOURCE=myOlapServer; Initial Catalog=FoodMart;',
'SELECT Measures.members ON ROWS,
[Product Category].members ON COLUMNS
FROM [Sales]')
as a
GO

The trick is to call sp_testlinkedserver, which tests whether we are able to connect to the specified linked server, and to run it together with the query inside the TRY block. We also set the variable @err so we know an error happened. The code could look something like this:


DECLARE @err BIT = 0;

BEGIN TRY
EXEC sp_testlinkedserver N'MSOLAP';

SELECT a.*
INTO #results
FROM OpenRowset( 'MSOLAP','DATASOURCE=myOlapServer; Initial Catalog=FoodMart;',
'SELECT Measures.members ON ROWS,
[Product Category].members ON COLUMNS
FROM [Sales]' )
as a;

END TRY

BEGIN CATCH
SET @err = 1;
END CATCH

IF @err = 1
BEGIN
--FOR EXAMPLE USE AND PERSIST RESULTS FROM THE LAST TIME THE SP WAS SUCCESSFULLY EXECUTED..
--(a BEGIN...END block needs at least one statement to compile, so a placeholder PRINT goes here)
PRINT 'Linked server call failed - falling back to the last persisted results';
END

Disclaimer: This is just a quick-fix tutorial; I definitely agree this is not the best example of using the TRY…CATCH block. Details on sp_testlinkedserver can easily be found here.

Build your own Data quality framework part 3/3

And here we are, at the final part of this tutorial. This third part is mainly about the Data quality issue fixing and the possible reporting layers. I have been working with this framework in an environment where we had SSRS reports embedded in a .NET web application. The links in the SSRS subreports for each Data quality rule would navigate you to the specific business objects in the application, where you would fix the errors. That’s the part where I find this solution really straightforward and powerful. If you’re in a completely different environment, you could write your own simple web page for the error-fixing module, or come up with any other UI you can think of. The same goes for the reporting module. I have used a simple SSAS Tabular model with Excel sheets pushed onto the internal SharePoint site, but you can really go in any direction you can think of.

So once you have the DQ schema tables filled with rules and their results, you can easily create an SSRS report dashboard ( a dataset joining the dql.ValidationResult and dql.ValidationRun tables ) looking something like this, for example:

[Image: Data quality dashboard report example]

The datasets for this report are a piece of cake to come up with when you think about the data model we have from the second part of this tutorial. You can use for example:


DECLARE @CurrValidationRun_ID INT;
SELECT @CurrValidationRun_ID = ISNULL(MAX(ValidationRun_ID),0) FROM dql.ValidationRun WHERE ValidationRunEnd IS NOT NULL;
DECLARE @PrevValidationRun_ID INT = @CurrValidationRun_ID - 1;

SELECT
VR.ValidationRule_ID,
VR.ValidationRuleName,
(SELECT VRP.ValidationErrorCount FROM dql.ValidationResult VRP WHERE VRP.ValidationRun_ID = @PrevValidationRun_ID AND VRP.ValidationRule_ID = VR.ValidationRule_ID) AS PrevErrorCount,
(SELECT VRC.ValidationErrorCount FROM dql.ValidationResult VRC WHERE VRC.ValidationRun_ID = @CurrValidationRun_ID AND VRC.ValidationRule_ID = VR.ValidationRule_ID) AS CurrentErrorCount
FROM dql.ValidationRule VR;

or you can use CROSS APPLY or whatever technique you are most comfortable with ( a sketch of an APPLY variant follows ). The Data quality rule column in the table should be a link to a subreport containing the specific rule errors, with links to the CRM application business objects.
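As a sketch of the APPLY route ( using OUTER APPLY so rules with no result row for one of the runs still show up; the table and column names are the ones from part 2 ):

DECLARE @CurrValidationRun_ID INT;
SELECT @CurrValidationRun_ID = ISNULL(MAX(ValidationRun_ID),0) FROM dql.ValidationRun WHERE ValidationRunEnd IS NOT NULL;
DECLARE @PrevValidationRun_ID INT = @CurrValidationRun_ID - 1;

SELECT
VR.ValidationRule_ID,
VR.ValidationRuleName,
PrevRun.ValidationErrorCount AS PrevErrorCount,
CurrRun.ValidationErrorCount AS CurrentErrorCount
FROM dql.ValidationRule VR
OUTER APPLY ( SELECT R.ValidationErrorCount
              FROM dql.ValidationResult R
              WHERE R.ValidationRun_ID = @PrevValidationRun_ID
              AND R.ValidationRule_ID = VR.ValidationRule_ID ) PrevRun
OUTER APPLY ( SELECT R.ValidationErrorCount
              FROM dql.ValidationResult R
              WHERE R.ValidationRun_ID = @CurrValidationRun_ID
              AND R.ValidationRule_ID = VR.ValidationRule_ID ) CurrRun;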

When it comes to the drill-down into the subreports, you need to query the dql.ValidationResultItem table, but you face one issue here. As you’ve seen, I have used 6 ResultCol columns for the results, and in the Contact with multiple invoicing addresses example all 6 columns are filled with data. But what if some rules use fewer than 6 columns? How can you prepare the report column layout in the specific rule results subreport? The same problem applies to the SSRS matrix / table column headers. I leave this part up to your imagination.

You can go with a simplistic approach: show all 6 columns in the table report layout, drive the column visibility and column header values from some Data quality rule-specific configuration, and solve the issue with a lookup expression. You could also choose a more technical approach and build dynamic SQL statements using the Data quality query metadata, preparing a dataset for a matrix-style report with some pivoting / unpivoting. I won’t go into more detail here, as everyone works in a different environment and this is really the fun part where you can spend some time evaluating the best available options. I leave the reporting for Management up to you as well; the sky is the limit there and the solution is totally based on your preferences.

Conclusion:

This lightweight Data quality framework is pretty easy to build yet very powerful. You can expand the functionality in many ways; you can even have the framework call available datasets used, for example, for address cleansing and apply some automatic corrections. You should not forget about row-level security, so users cannot touch each other’s business entities. But the main idea is to provide users an easy-to-use interface through which they can fix the Data quality issues they are responsible for. Just remember to keep it simple and well performing!

Build your own Data quality framework part 2/3

The database back-end part of this simple framework will be made of these parts:

schema dql – containing all the Data quality objects mentioned below

table dql.ValidationRule – containing a list of Data quality rules with their meta information

table dql.ValidationRun – containing information about the validation runs

table dql.ValidationResult – containing aggregated information with the results needed for your reporting

table dql.ValidationResultItem – containing the lowest-level info related to the business-object-specific errors, i.e. a list of the Contacts having more than 1 invoicing address, filled in SQL_VARIANT datatype columns

stored procedure dql.ValidationRunner – responsible for the Data quality rules evaluation, maintenance of data and aggregations for reporting

stored procedure dql.ValidationRuleAPI – responsible for the Data quality rules management

SQL Agent job DataQuality – responsible for scheduled execution of the stored procedure dql.ValidationRunner

Here is a relational schema to give you a better idea:

[Image: relational schema of the dql tables]

And how do we bring the Data quality evaluation to life? First we need to fill the Data quality rules and their meta information into the dql.ValidationRule table.

INSERT INTO dql.ValidationRule
( ValidationRuleName
, ValidationRuleDescription
, ValidationRuleQuery
, ValidationRuleColumns
, IsActive )

SELECT
'Contacts with multiple invoicing Address',
'List of all contacts with their owners having multiple invoicing addresses in the CRM',
'SELECT Contact_ID, ContactName, ContactOwner_ID, ContactOwnerName, AddressType, COUNT(*) AS [InvoicingAddressCount]
FROM dbo.Contact WITH (NOLOCK)
WHERE AddressType = ''Invoicing''
GROUP BY Contact_ID, ContactName, ContactOwner_ID, ContactOwnerName, AddressType
HAVING COUNT(*) > 1',
'Contact_ID,ContactName,ContactOwner_ID,ContactOwnerName,AddressType,InvoicingAddressCount',
1 UNION ...

At this point you can build your simple stored procedure API. Just prepare a stored procedure called dql.ValidationRuleAPI that can either insert, update or delete a Data quality rule based on the provided parameters; a sketch follows below.
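For illustration, here is a minimal sketch of what such a procedure could look like ( the @Action convention and the parameter datatypes are just my assumptions, not a prescribed interface ):

CREATE PROCEDURE dql.ValidationRuleAPI
	@Action NVARCHAR(10), --'INSERT', 'UPDATE' or 'DELETE'
	@ValidationRule_ID INT = NULL,
	@ValidationRuleName NVARCHAR(255) = NULL,
	@ValidationRuleDescription NVARCHAR(MAX) = NULL,
	@ValidationRuleQuery NVARCHAR(MAX) = NULL,
	@ValidationRuleColumns NVARCHAR(MAX) = NULL,
	@IsActive BIT = 1
AS
BEGIN
	SET NOCOUNT ON;

	IF @Action = 'INSERT'
		INSERT INTO dql.ValidationRule
		( ValidationRuleName, ValidationRuleDescription, ValidationRuleQuery, ValidationRuleColumns, IsActive )
		VALUES
		( @ValidationRuleName, @ValidationRuleDescription, @ValidationRuleQuery, @ValidationRuleColumns, @IsActive );

	IF @Action = 'UPDATE'
		UPDATE dql.ValidationRule
		SET ValidationRuleName = ISNULL(@ValidationRuleName, ValidationRuleName),
		    ValidationRuleDescription = ISNULL(@ValidationRuleDescription, ValidationRuleDescription),
		    ValidationRuleQuery = ISNULL(@ValidationRuleQuery, ValidationRuleQuery),
		    ValidationRuleColumns = ISNULL(@ValidationRuleColumns, ValidationRuleColumns),
		    IsActive = @IsActive
		WHERE ValidationRule_ID = @ValidationRule_ID;

	IF @Action = 'DELETE'
		DELETE FROM dql.ValidationRule
		WHERE ValidationRule_ID = @ValidationRule_ID;
END

A call then looks like, for example: EXEC dql.ValidationRuleAPI @Action = 'DELETE', @ValidationRule_ID = 1;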

Next you need to create the stored procedure dql.ValidationRunner, which takes care of the main part. It has to create a row in the dql.ValidationRun table to mark that the validation run has started. Then it needs to iterate through the dql.ValidationRule table, take the value from the ValidationRuleQuery column, build a dynamic SQL statement and fill the result table dql.ValidationResultItem with each Data quality query’s results. ( No need for cursors here; you can refer to this link for a much more lightweight solution using a temp table, sketched below after the INSERT example. )


DECLARE @statement NVARCHAR(MAX);
SET @statement = N'YOUR DATA QUALITY QUERY SELECT.. GOES HERE';

INSERT INTO dql.ValidationResultItem(
ValidationRun_ID ,
ValidationRule_ID ,
ResultCol1 ,
ResultCol2 ,
ResultCol3 ,
ResultCol4 ,
ResultCol5 ,
ResultCol6 )

EXEC sp_executesql @statement;
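To sketch the cursor-free iteration itself ( just an outline; the dynamic INSERT part is the one shown above, and the run / result logging is left as comments ):

IF OBJECT_ID('tempdb..#rules') IS NOT NULL
DROP TABLE #rules;

SELECT ValidationRule_ID, ValidationRuleQuery
INTO #rules
FROM dql.ValidationRule
WHERE IsActive = 1;

DECLARE @CurrentRule_ID INT;
DECLARE @RuleQuery NVARCHAR(MAX);

WHILE EXISTS (SELECT * FROM #rules)
BEGIN
	SELECT TOP(1)
	@CurrentRule_ID = ValidationRule_ID,
	@RuleQuery = ValidationRuleQuery
	FROM #rules
	ORDER BY ValidationRule_ID;

	--wrap @RuleQuery into the INSERT INTO dql.ValidationResultItem ... statement shown above,
	--EXEC sp_executesql it, and write the row count for this rule into dql.ValidationResult

	DELETE FROM #rules WHERE ValidationRule_ID = @CurrentRule_ID;
END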

Note: In case the result column count differs from query to query, you need to generate the INSERT INTO… statement in a dynamic SQL manner as well. You can use some simple logic based on the column count metadata for the specific rule. The result would look like EXEC sp_executesql @statement, where @statement is set to something like: 'INSERT INTO dql.ValidationResultItem (…) SELECT DATA QUALITY QUERY SELECT GOES HERE'…

After each rule finishes, this SP also needs to write the number of results found for the rule into the dql.ValidationResult table. Once all the rules are finished and all the results are inserted into the result table, the SP needs to mark the end of the validation run back in the dql.ValidationRun table and do some maintenance so the result table does not grow out of control. This table could easily become really huge, considering the job is scheduled once an hour, evaluating on average 50 rules with roughly 100 results per rule. Avoid using a DELETE statement here at any cost; use TRUNCATE. I spilled the results for the last 2 runs into a temp table, truncated the result item table and inserted the spilled results back ( a sketch follows ). I am pretty sure you can come up with a more elegant solution.
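The maintenance step could be sketched roughly like this ( assuming ValidationRun_ID is increasing and the result item columns are the eight listed above ):

DECLARE @KeepFromRun_ID INT;

--the two most recent finished runs are the ones we keep
SELECT @KeepFromRun_ID = MIN(i.ValidationRun_ID)
FROM ( SELECT TOP(2) ValidationRun_ID
       FROM dql.ValidationRun
       WHERE ValidationRunEnd IS NOT NULL
       ORDER BY ValidationRun_ID DESC ) i;

IF OBJECT_ID('tempdb..#keep') IS NOT NULL
DROP TABLE #keep;

SELECT ValidationRun_ID, ValidationRule_ID, ResultCol1, ResultCol2, ResultCol3, ResultCol4, ResultCol5, ResultCol6
INTO #keep
FROM dql.ValidationResultItem
WHERE ValidationRun_ID >= @KeepFromRun_ID;

TRUNCATE TABLE dql.ValidationResultItem;

INSERT INTO dql.ValidationResultItem
( ValidationRun_ID, ValidationRule_ID, ResultCol1, ResultCol2, ResultCol3, ResultCol4, ResultCol5, ResultCol6 )
SELECT ValidationRun_ID, ValidationRule_ID, ResultCol1, ResultCol2, ResultCol3, ResultCol4, ResultCol5, ResultCol6
FROM #keep;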

Here is a simple UML diagram explaining the dql.ValidationRunner stored procedure workflow.

[Image: UML diagram of the dql.ValidationRunner workflow]

One important note: DO make sure all the Data quality queries have NOLOCK hints. You don’t want to block your production CRM application with your queries. I also strongly advise running the framework on a different server than your production CRM database.

Build your own Data quality framework part 1/3

This time I would like to talk about building your own simple Data quality framework using T-SQL. The framework will be implemented as a SQL Agent job evaluating the Data quality queries against your client’s CRM database and persisting the results from the last 2 runs in a separate DB schema, with some front end for Data quality issue fixing and reporting. The solution will also be generic ( not dependent on column names ), so you can combine all kinds of Data quality rules the business can think of.

This tutorial will have 3 parts. In the first part, I would like to give a high-level overview of what we are building and how this Data quality tool can work. The second part will focus on the DB back-end steps needed, and the third part will focus on the Data quality fixing and reporting.

Let me start by saying that Data quality is absolutely critical when a group of people is entering data into a system. I have a lot of experience from an environment where an internal CRM / ERP system was being developed. Issues in Data quality would be real business-stoppers there ( and probably anywhere else ). Imagine that you would like to invoice your client, but someone from the back-office team inserted an incomplete address, or you have multiple addresses assigned to the client and you don’t know which one should be used for invoicing. And trust me, managers who cannot invoice their clients are not the right people to mess with. I am pretty sure you can come up with plenty of issues like this.

With the solution I am about to present, you don’t have to buy a pricey 3rd-party application to solve these kinds of issues. So let’s go through the possible scenarios.

1. The Business Analyst knows exactly what is needed to ensure the required data quality on the client’s side and manages these rules, written in SQL, through an API he is provided with. This API could be implemented as a simple stored procedure used for insert / delete / update based on the parameters provided.

2. The CRM Manager comes up with rules needed to achieve the required data quality in the client application. Let’s suppose this person does not know how to write SQL code, so the rules are handed over to the Business Analyst as verbally described definitions ( e.g. 1 contact can have multiple addresses but only one address can be used for invoicing ) and the Business Analyst translates them into SQL, for example like this:


SELECT Contact_ID, ContactName, ContactOwner_ID, ContactOwnerName, AddressType, COUNT(*) AS [InvoicingAddressCount]
FROM dbo.Contact WITH (NOLOCK)
WHERE AddressType = 'Invoicing'
GROUP BY Contact_ID, ContactName, ContactOwner_ID, ContactOwnerName, AddressType
HAVING COUNT(*) > 1

and passes them into the API.

3. You could also have the DB developer come up with these rules and implement them. ( definitely not my preferred scenario )

[Image: workflow diagram of the Data quality framework]

4. Once the rules are in place, the DB job is scheduled and the rules are being evaluated against the CRM database, you will need 2 front ends on the other side. One front end will be used by the CRM users for fixing the Data quality issues, and the second will be used for reporting to Management. Seeing how well your employees manage the Data quality of their business objects is an interesting KPI.

Next time, I will talk more about the DB project implementation.

Rebuild / reorganize those indexes

Lately I brought back to life a script that I would like to briefly talk about. It is made of dynamic T-SQL code which I find pretty neat and really useful when it precedes the OLAP processing tasks ( ideally together in an SSIS package with a consecutive statistics update task using sp_updatestats ).

I am not going to share the whole thing, but I will give some hints on how to build this code yourself.

In the declaration part, you set the variables with the fragmentation thresholds, which determine whether the index is to be rebuilt or just reorganized, and the minimal index page count. Just a hint here: we currently have this set up with a 14 percent threshold to reorganize, 35 to rebuild and 500 as the minimal page count.
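A sketch of that declaration part, using the variable names the query below expects and the thresholds just mentioned:

DECLARE @reorg_pct DECIMAL(5,2) = 14;   --reorganize above this fragmentation percentage
DECLARE @rebuild_pct DECIMAL(5,2) = 35; --rebuild above this fragmentation percentage
DECLARE @pagecnt INT = 500;             --skip indexes with fewer pages than this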

The next step is to create a temp table which will hold the index names and the dynamically generated rebuild / reorganize statements. After this table is filled, you iterate through it and execute the statement for each row ( a sketch of that loop is at the end of this post, after the query ).

The main magic, however, happens in this query:

SELECT
b.name as key_name
,a.avg_fragmentation_in_percent
,a.page_count, c.name as Tab_name
,d.name as Schema_name
,a.partition_number as Partition_Number
,'ALTER INDEX [' + b.name + '] ON [MyDB].[' + d.name + '].[' + c.name + '] '
+ CASE
WHEN a.avg_fragmentation_in_percent BETWEEN @reorg_pct AND @rebuild_pct
THEN 'REORGANIZE'
WHEN a.avg_fragmentation_in_percent > @rebuild_pct
THEN 'REBUILD'
END
+ CASE
WHEN a.partition_number > 1
THEN ' PARTITION = ' + CAST(a.partition_number AS NVARCHAR)
ELSE ''
END
AS index_maintenance
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) a
JOIN sys.indexes b ON b.object_id = a.object_id AND b.index_id = a.index_id
JOIN sys.objects c ON c.object_id = b.object_id
JOIN sys.schemas d ON d.schema_id = c.schema_id
WHERE a.index_id > 0 AND c.name NOT LIKE 'sys%' AND a.page_count > @pagecnt
AND a.avg_fragmentation_in_percent > @reorg_pct;
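And a sketch of the iteration part, assuming the query above was written into the temp table ( for example by adding INTO #index_maintenance just before its FROM clause ):

DECLARE @stmt NVARCHAR(MAX);

WHILE EXISTS (SELECT * FROM #index_maintenance)
BEGIN
	SELECT TOP(1) @stmt = index_maintenance
	FROM #index_maintenance;

	EXEC sp_executesql @stmt;

	DELETE FROM #index_maintenance
	WHERE index_maintenance = @stmt;
END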