29 June 2010

💎SQL Reloaded: Number of Records I (The CLR Version)

    While quickly scanning the daily posts from the MSDN blogs, I stopped at Chris Skorlinski’s “Example code executing TSQL for each Table in a database” post, wondering whether he was using techniques other than the ones I had used in the past. I have run into this type of need several times, especially when I wanted to report in an automated manner the number of records for a given set of tables, typically during ETL tasks. The problem can easily be solved with the help of a cursor, though it’s not the most elegant technique, even if it gets the job done. I remember trying to find a way to execute a dynamic statement inside a UDF (user-defined function), though that’s not possible because a stored procedure can’t be executed inside a function. An alternative would be the OPENROWSET function executing a statement against a linked server, though OPENROWSET doesn’t work with dynamically created statements.
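
    For reference, a minimal cursor-based sketch along the lines mentioned above could look as follows; it simply loops over the user tables of the current database and returns the record count for each (the results could just as well be collected into a temporary table):

-- cursor-based record count per table (minimal sketch) 
DECLARE @schema_name nvarchar(128) 
DECLARE @table_name nvarchar(128) 
DECLARE @sql nvarchar(max) 

DECLARE TableCursor CURSOR FOR 
SELECT s.name, t.name 
FROM sys.tables t 
     JOIN sys.schemas s 
       ON t.schema_id = s.schema_id 
ORDER BY s.name, t.name 

OPEN TableCursor 
FETCH NEXT FROM TableCursor INTO @schema_name, @table_name 
WHILE @@FETCH_STATUS = 0 
BEGIN 
    -- build and run a count statement for the current table 
    SET @sql = N'SELECT ''' + @schema_name + N'.' + @table_name + N''' TableName, count(1) NumberRecords FROM ' + QUOTENAME(@schema_name) + N'.' + QUOTENAME(@table_name) 
    EXEC sp_executesql @sql 
    FETCH NEXT FROM TableCursor INTO @schema_name, @table_name 
END 
CLOSE TableCursor 
DEALLOCATE TableCursor 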

    Five years ago, soon after SQL Server 2005 RTM appeared on the market, I tried to create a UDF programmatically using an example from Bob Beauchemin & co.’s “A First Look at SQL Server 2005 for Developers” book. The example worked then, but not without some headaches, as I had to spend some time troubleshooting the errors received; if I remember correctly, they were caused by a change in the way the connection was created from the SQL Server SqlClient. Anyway, that’s already history, though since then I have been wondering whether it is possible to use a programmatically created UDF to return the number of records for a given table sent as parameter. I hadn’t had the chance to test that, even if it seemed logically possible, but seeing Chris Skorlinski’s post, and having installed Visual Studio 2010 on my home computer last week, I said to myself that this is a nice way to see the tool at work. Thought and done, but again not without problems, as I ran into several small but time-consuming issues.

    So, armed with some patience, I did a quick search on the Microsoft site on how to use CLR scalar-valued functions under SQL Server 2008, running into this MSDN resource, which returns the record count using a static query. I opened Visual Studio 2010, created a new Database/SQL Server/Visual Basic SQL CLR Database Project named SQLServerProject1 (I forgot to give the project a meaningful name) and added a database reference to the AdventureWorks database using Windows Authentication.

CLR UDF - Database Reference

    Then, in the newly created project, I added a UDF (Add/New Item/User Defined Function), naming the new class CLRLibrary. I replaced the existing function with the one from the MSDN article, added a new String input parameter called TableName, modified the query and changed the UDF’s name to NumberRecords, as per the below piece of code:
 
Imports System 
Imports System.Data 
Imports System.Data.SqlClient 
Imports System.Data.SqlTypes 
Imports Microsoft.SqlServer.Server 
Imports System.Runtime.InteropServices 'added manually 
 

Partial Public Class UserDefinedFunctions 
    'the function needs read access to the context connection 
    <SqlFunction(DataAccess:=DataAccessKind.Read)> _ 
    Public Shared Function NumberRecords(ByVal TableName As String) As Integer 
        Using conn As New SqlConnection("context connection=true") 
            conn.Open() 
            Dim cmd As New SqlCommand("SELECT COUNT(1) AS [NumberRecords] FROM " & TableName, conn) 
            Return CType(cmd.ExecuteScalar(), Integer) 
        End Using 
    End Function 
End Class 

    In addition, I had to add a reference to System.Runtime.InteropServices. The function was now ready for testing, so I built the solution, copied CLRLibrary’s Full Path into SQL Server Management Studio and attempted to follow the steps described in the MSDN article. Now it’s time for some troubleshooting fun… First I enabled the CLR integration configuration option by running the below script:

--enable CLR integration 
USE master 
GO 
sp_configure 'clr enabled', 1 
GO 
RECONFIGURE 
GO 
       
    Using the master database, I attempted to create a reference to the assembly, using its Full Path and dll name:
 
CREATE ASSEMBLY SqlServerProject1 
FROM 'D:\SqlServerProject1.dll' 
WITH PERMISSION_SET = EXTERNAL_ACCESS 

   The next step was to declare the function:

CREATE FUNCTION dbo.NumberRecords(@table_name nvarchar(100)) 
RETURNS int  
AS  
EXTERNAL NAME SqlServerProject1.[SqlServerProject1.UserDefinedFunctions].NumberRecords 
GO  
 

    This is actually the final form of the DDL script, because initially I used only a combination of the project, class and function names (e.g. SqlServerProject1.UserDefinedFunctions.NumberRecords), receiving the following error message:
Msg 6505, Level 16, State 2, Procedure NumberRecords, Line 1
Could not find Type 'UserDefinedFunctions' in assembly 'SqlServerProject1'.

    After a quick search on Google I found out from an Eggheadcafe forum thread that the namespace was missing from the declaration; once I made this change, the above statement worked without problems. The next step was naturally to test whether the function works:

SELECT dbo.NumberRecords('AdventureWorks.HumanResources.Department')  

And it worked! So it was time to test it against a set of tables from the AdventureWorks database:
 
SELECT s.name [schema_name]
, t.name table_name 
, dbo.NumberRecords('AdventureWorks.' + s.name + '.' + t.name) NumberRecords 
FROM AdventureWorks.sys.tables t 
      JOIN AdventureWorks.sys.schemas s 
        ON t.schema_id = s.schema_id  
ORDER BY s.name  
, t.name 

  Actually, I missed an important thing in the above steps: somewhere while attempting to test the function or to register the assembly, I received the following error message:
A .NET Framework error occurred during execution of user-defined routine or aggregate "NumberRecords": System.InvalidOperationException: Data access is not allowed in this context.  Either the context is a function or method not marked with DataAccessKind.Read or SystemDataAccessKind.Read, is a callback to obtain data from FillRow method of a Table Valued Function, or is a UDT validation method.

   A post on the MSDN SQL Server forum reminded me that I had forgotten to check whether I was targeting the 3.5 or 4.0 .Net Framework. So in the SQLServerProject1 project, from the main menu/Project/SQLServerProject1 Properties/Compile/Advanced Compile Options/Target Framework, I changed the value to .Net Framework 3.5. This change requires the project to be closed and reopened.

    In between the above steps I also had to check whether the assembly was registered, using the below script:

--assembly file details 
SELECT A.name [Assembly] 
, AF.name File_Path 
FROM sys.assemblies A 
     JOIN sys.assembly_files AF 
        ON A.assembly_id = AF.assembly_id 
WHERE A.is_user_defined=1 
--AND A.name='SqlServerProject1' 

    Also, if you’d like to remove the assembly from the database, you’ll first have to drop the function, and only then drop the assembly:

-- cleaning up
DROP FUNCTION NumberRecords 
DROP ASSEMBLY SqlServerProject1 

    It can be further discussed whether this approach is acceptable performance-wise, whether it can be misused, etc. Actually, a nice misuse is the following statement, which appends a WHERE constraint to the table name:

SELECT dbo.NumberRecords('AdventureWorks.HumanResources.Department WHERE Name LIKE ''P%''') 

    A second parameter could be created for passing the WHERE constraint or, why not, the whole query could be passed as parameter, as long as only a scalar is returned. I hope I haven’t forgotten any step; I’m kind of mixing the facts because I also attempted to create the same function using a newly created login, running into other types of issues that maybe deserve a second post.
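
    Just as a sketch of the above idea, the DDL and call for a two-parameter variant could look as follows, assuming the VB.Net method were extended with a matching second String parameter (the NumberRecordsFiltered name is purely illustrative):

-- hypothetical two-parameter variant (requires a matching CLR method) 
CREATE FUNCTION dbo.NumberRecordsFiltered(@table_name nvarchar(100), @where_clause nvarchar(500)) 
RETURNS int 
AS 
EXTERNAL NAME SqlServerProject1.[SqlServerProject1.UserDefinedFunctions].NumberRecordsFiltered 
GO 

-- example call 
SELECT dbo.NumberRecordsFiltered('AdventureWorks.HumanResources.Department', 'Name LIKE ''P%''') 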

27 June 2010

🛒Market Review: What’s New in Microsoft World II

Microsoft Office - Cloud Computing is the Word

    Two weeks ago, on the 15th of June 2010, Microsoft Office 2010 was shipped together with Visio and Project 2010, closing the cycle of releases started with SQL Server 2008 R2, Visual Studio 2010, SharePoint 2010 (all three shipped in April 2010) and Windows Azure (also available in April). The words that best describe and unite these software tools are cloud computing and collaboration. Why that? First we have to consider Azure, the new product in Windows’ portfolio, a framework for cloud computing and SaaS (Software as a Service) architectures, composed of 3 components: Windows Azure, which allows running applications and accessing data in the cloud, SQL Azure Database, which provides data services in the cloud, and Windows Azure platform AppFabric, which allows the communication between the applications residing in the cloud. MS Office 2010 is also part of Microsoft’s strategy toward cloud computing, the weight falling on SharePoint 2010, a business collaboration platform that together with the other MS Office tools allows managing information, automating and managing business processes, facilitating the decision-making process, etc. A cornerstone of the framework is the co-authoring tool that “allows multiple people to work on a single copy of a document at the same time or at different times, seamlessly, whether they are online or offline”. As it seems, there are also “community features that allows users to share data as they do on Twitter and Facebook”, a step toward social computing. Microsoft plans to offer an online version of Office 2010, called Office Web Apps (OWA), supposed to be also a competitor for Google Docs.

    There are also people who question the steps taken by Microsoft toward cloud computing, but in the end it is important to establish the software infrastructure in which cloud computing-based applications can be developed; features that don’t exist currently could appear in future versions or could be provided by third-party vendors.

    Microsoft also comes with some unpleasant surprises: as it seems, Microsoft’s SharePoint Server runs only on 64-bit hardware and also requires a 64-bit SQL Server edition, and this could be quite an important constraint for many customers. The most unpleasant surprise is that Microsoft renounces the well-known upgrade scheme, the reason for that, as mentioned in Ars Technica quoting a Microsoft spokesman, being the need to simplify the product lineup and pricing, based on “partner and customer feedback” (I’m sorry, but I can’t really buy that!). The same source expects that upgrades will be available through promotions after Office’s launch. The only promotion I have heard of is the Microsoft Office 2010 Technology Guarantee program, but it refers only to the customers who “purchased, installed, and activated a qualifying Microsoft Office 2007 product between March 5, 2010, and September 30, 2010”, they being eligible to download Office 2010 at no additional cost. How about those who bought a Microsoft Office 2007 copy in 2010 but before the 5th of March (like I did)?!

Microsoft TechEd North America Sessions are Online

    The Microsoft TechEd North America sessions held in New Orleans were made available online (video and slides), an opportunity for technical professionals to get an overview of the new advancements in Microsoft technologies, with topics related to the various platforms of Windows, MS Office, Dynamics, Web, Cloud Computing & Online Services, etc. I really like the way Microsoft makes its technologies available to the public, especially the fact that it also provides Express versions of its software, allowing newbies and developers to get acquainted with and use essential basic functionality. The MSDN, TechNet, webcasts, Channel9, and community and personal blogs bring the technical and non-technical closer to the company and its technologies.

25 June 2010

💎SQL Reloaded: Trivial Equalities in Queries

    If I remember correctly from the Math literature, equalities of the type 0=0, 1=1 or, more generally, n=n can also be referred to as trivial equalities. I tried to find, without success though, an exact definition of what is meant by trivial equality; the closest I could get is Wikipedia’s content on the use of the trivial adjective in Mathematics for objects that have a simple structure. As per my perception, not sure if I’m correct, a trivial equality is an equality in which one of the members is a constant, the same definition holding also for trivial inequalities.

    Now, what do trivial equalities like 1=1 have to do with queries?! As 1=1 always evaluates to true, just as 1=0 evaluates to false, they can be used in certain scenarios as a simple technique to return all the records, respectively no records, from a table. For example, given the fact that 0=1 evaluates to false, I used such a constraint under ADO in queries like the below one in order to retrieve metadata from a database.

SELECT *  
FROM Production.Product 
WHERE 1=0 

    In the same way 1=1 could be used to retrieve all the records from a table:

SELECT *  
FROM Production.Product 
WHERE 1=1 

    Now, if in the above query we add different constraints, the 1=1 allows us to comment/uncomment the constraints as we wish with a minimum of changes.

SELECT *  
FROM Production.Product 
WHERE 1=1 
AND Color = 'Black' 
--AND SafetyStockLevel>800 
AND StandardCost>300 

    This makes it quite useful when creating dynamic queries (see the first constraint from the stored procedure given as example in the Just In Case – Part IV: Dynamic Queries post). I have actually seen this way of writing a query quite often, especially among Oracle developers. A long time ago I asked somebody what the consideration behind its use was, apart from the fact that it allows adding or removing (commenting) constraints. The answer was quite fuzzy, a possible improvement in the query’s performance being mentioned. Some time ago somebody mentioned that in older SQL Server database engines such a constraint could lead to an unexpected query plan, and thus poorer performance. Normally the database engine should recognize such statements as meaningless and ignore them, however from theory to implementation is a long way and anything is possible. An example in this direction can be found in The real cost of performance, the post and the comments that followed revealing the various facets of using the equality sign in programming languages like C# or VB. SQL shouldn’t be so complex, as it is not typically used in working with objects.
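
    As a minimal sketch of the dynamic-query scenario mentioned above (the parameter names are only illustrative), the 1=1 acts as an anchor so that each optional constraint can simply be appended with AND:

-- dynamic query built on top of a 1=1 anchor (minimal sketch) 
DECLARE @Color nvarchar(15) = 'Black' 
DECLARE @StandardCost money = 300 
DECLARE @sql nvarchar(max) = N'SELECT * FROM Production.Product WHERE 1=1' 

IF @Color IS NOT NULL 
   SET @sql = @sql + N' AND Color = @Color' 
IF @StandardCost IS NOT NULL 
   SET @sql = @sql + N' AND StandardCost > @StandardCost' 

EXEC sp_executesql @sql 
, N'@Color nvarchar(15), @StandardCost money' 
, @Color = @Color 
, @StandardCost = @StandardCost 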

    I have also used such trivial equalities in cartesian JOINs as an artifice in order to use ANSI syntax in queries; here’s an example using the SplitList table-valued function introduced a long time ago:

SELECT * 
FROM dbo.SplitList('1,2,3,4', ',') A  
       JOIN dbo.SplitList('A,B,C,D', ',') B  
          ON 1=1 

    Given the above definition of trivial equalities, in theory all constraints in which one of the members is hard-coded could be regarded as trivial. Actually, all numeric constraints could be written as trivial equalities by moving all attributes into one of the members, the other being a constant, for example:

SELECT P.Name Product 
, PM.Name ProductModel 
FROM Production.Product P 
        JOIN Production.ProductModel PM 
            ON P.ProductModelID - PM.ProductModelID = 0 

    Even if such a statement is logically correct, it should be avoided when possible, given the fact that it decreases the performance of the query considerably (just compare the query plan of the above query with the one of the typical join), the indexes, if they exist, not being used efficiently.
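
    For comparison, here is the typical join against which the above query plan could be checked:

SELECT P.Name Product 
, PM.Name ProductModel 
FROM Production.Product P 
        JOIN Production.ProductModel PM 
            ON P.ProductModelID = PM.ProductModelID 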

20 June 2010

🎡SSIS: Using Oracle as Data Source

Making SSIS work with Oracle doesn’t seem to be a complicated task, especially after several years of experience in doing it, though, as usual, something new appears in the landscape – new software versions, new requirements, an environment with its own particularities, etc. In general, when an application needs a connection to an Oracle server, several components that come with the Oracle Client need to be installed on the client computer, the TNS (Transparent Network Substrate) name needs to be added/configured, eventually some global variables need to be set up in case the components were not installed in the default location, and, depending on the chosen provider, a DSN (Data Source Name) pointing to the Oracle server might also need to be configured. SSIS actually makes no exception from these steps; once the steps are performed, you should in theory be ready to develop, test, deploy, schedule and run packages using a connection to an Oracle server.

Unfortunately, from my experience, every 2-3 installations there is a problem with the Oracle Client and its configuration, most of the times the solution being quite simple – removing some declarations from the set-up files, correcting the global variables or the TNS name. As it seems, the Oracle Client is quite sensitive to changes in the default installation path, therefore it is advisable to install the Client using the default path unless you’d like to gain more experience in troubleshooting such installation issues. In addition, as with any respectable company, some of the products come with their own defects, the patching of such issues being not so easy; it’s therefore advisable to check beforehand the known issues of the Client version you need to install, and, in case you need to take advantage of the latest Oracle features, it makes sense to install the latest stable Client.

Because the Oracle Client downloadable package weighs a few hundred MB, there is a thinner alternative to the Oracle Client, namely the Instant Client package; the components can be downloaded from the Oracle site (here) and installed individually, as a minimum being necessary to install the Instant Client Package – Basic or Basic Lite version, the ODBC libraries and the SQL*Plus libraries in case you want to test the connection. Of course, depending on the case, the JDBC, SDK or any other packages Oracle made available could be installed as well.

On 64-bit platforms it might be needed to install the 32-bit and 64-bit Oracle/Instant Clients in parallel (see the SSIS, Oracle and X64 post from the Business Vision DEV Team), while in order to troubleshoot the various issues it could be a good idea to check the differences between the 32-bit and 64-bit registry (here).

There are several drivers that allow you to connect to an Oracle database using SSIS, the most popular ones being:
- Microsoft OLE DB Provider for Oracle
- Oracle Provider for OLE DB
- .Net Framework Data Provider for Oracle
- .Net Framework Data Provider for Odbc
- Oracle Data Provider for .NET

The Microsoft drivers come with MDAC (Microsoft Data Access Components), its latest version being MDAC 2.8. Starting with Windows Vista and Windows Server 2008, Microsoft changed MDAC into WDAC (Windows Data Access Components), including it as part of the operating system and removing thus the need to redistribute the components. See the Data Access Technologies Road Map, FAQ and Troubleshooting MDAC/WDAC for MDAC/WDAC architecture, components and releases, respectively troubleshooting. Given the various issues that exist with a particular MDAC/WDAC library, see in KB301202 how you can check the current version by using the registry, while for Windows versions up to Windows Server 2003 the Component Checker MDAC Utility can be used.

While the .Net Framework Data Providers for Oracle, respectively for Odbc, come with the .Net framework, the Oracle Data Provider for .NET (see also its FAQ), Oracle’s implementation for ADO.NET data access, supposed to take advantage of advanced Oracle database functionality, comes with ODAC (Oracle Data Access Components). Please note that you might need to install the respective components in addition to the Oracle Client.

There are also third-party drivers for Oracle, or even specifically for SSIS, for example the Microsoft Connectors Version 1.1 for Oracle and Teradata for use with SSIS, from Attunity; see also a few tips from the SQL Server Performance blog.

Before taking any decision on which driver to use, it might be a good idea to look also at the drivers’ limitations and advantages. In the past I often used the Microsoft Oracle ODBC Driver, until I ran into an important limitation, namely its lack of Unicode support, this residing, according to KB244661, in the fact that “from Microsoft Data Access Components (MDAC) version 2.5 and later versions, both the Microsoft ODBC Driver and OLE DB Provider support ONLY Oracle 7 and Oracle 8i”. The Oracle Provider for OLE DB also seems to have its own limitations.

SQL Server provides linked servers, the powerful functionality of executing commands against OLE DB data sources on remote servers, including Oracle. As linked servers offer the ability to create cross-vendor distributed queries, in certain scenarios they could prove to be a powerful alternative for querying Oracle databases, in such cases an additional connection to Oracle in the SSIS package not being needed. KB280106 and an article on the Oracle Provider for OLE DB describe how to set up and troubleshoot a linked server to an Oracle database in SQL Server.
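
For orientation only, a linked server to Oracle could be set up along the following lines; the server name, TNS alias, provider and credentials are purely illustrative and depend on the Oracle provider installed on the machine:

-- minimal sketch of a linked server to an Oracle database 
EXEC sp_addlinkedserver @server = 'ORATEST' 
, @srvproduct = 'Oracle' 
, @provider = 'OraOLEDB.Oracle' 
, @datasrc = 'MyTnsAlias' -- TNS name configured on the SQL Server machine 

EXEC sp_addlinkedsrvlogin @rmtsrvname = 'ORATEST' 
, @useself = 'false' 
, @rmtuser = 'scott' 
, @rmtpassword = 'tiger' 

-- pass-through query executed on the Oracle side 
SELECT * 
FROM OPENQUERY(ORATEST, 'SELECT count(*) NumberRecords FROM dual') 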

You might also want to check the SQL Server Integration Services with Oracle Database 10g White Paper coming from Microsoft, the “Connectivity and SQL Server 2005 Integration Services” MSDN article written by Bob Beauchemin, Scott Barrett’s blog or this SSIS 64 bit – Using Oracle Provider post.

💎SQL Reloaded: Troubleshooting Query Issues III (Logical errors)

    In the last post I provided some general guidelines on how to troubleshoot query issues related to errors thrown by the database engine. That’s the fortunate case, because the engine’s validation shows that something went wrong. I’m saying fortunate because there are also logical errors that result in unexpected output and can remain undiscovered even for years. It’s true that such errors are hard to discover, but not impossible; adequate testing combined with defensive programming techniques can decrease the volume of such errors. The simplest and handiest test is performed by looking, in a first phase, at the number of records returned by the query, and secondly by studying the evolution of cardinality with each table added to the query. The clearest signal that something went bad is when no records, too few or too many records are returned (compared with what’s expected); then the problem most probably resides in the constraints used, namely in the JOIN, WHERE or HAVING clause constraints.

WHERE clause constraints

    If your query is just a simple statement without joins of any type, then most likely the problem resides in the constraints used in the WHERE clause, at least one of them being evaluated to false. Here are several possible reasons:
- One of the hardcoded values or one of the provided parameters is not found among the values available in the table;
- NULL cases not handled;
- Wrong use of predicates or parentheses.

    The simplest constraint evaluated to false is “0=1” or something similar; actually, any comparison of constants with the same outcome can be used (e.g. ‘A’=’B’, ‘ABC’=’’, etc.). In contrast, “1=1” is always evaluated to true. The below query returns no records because the constraint is evaluated to false:
 
-- single constraint evaluated to false 
SELECT * 
FROM Production.Product 
WHERE 1=0 

    A more complex scenario is when multiple predicates are used and at least one always evaluates to true or to false, the task in such situations being to identify such constraints:

-- multiple constraints, one evaluated to false 
SELECT * 
FROM Production.Product 
WHERE 1=0 
    AND Class = 'L' 
    AND SafetyStockLevel >100 

    The constraints could also be grouped using parentheses:

-- multiple constraints with parenthesis, one evaluated to false 
SELECT * 
FROM Production.Product 
WHERE 1=0 
    AND ((Class = 'L' 
    AND SafetyStockLevel >100) 
      OR (Class = 'M' 
    AND SafetyStockLevel =100)) 

    Dealing with multiple such constraints is not a question of guessing but of pure applied Boolean algebra, essential in writing accurate queries. So if you feel that you’re not mastering such concepts, then maybe it’s a good idea to consult some material on this topic.

    There could be situations in which the total outcome of a set of constraints is not so easy to see; in such cases the constraints can be brought into the SELECT statement. Which constraints should be tested depends also on the particularities of your query; here is an example:

--testing constraints' outcome for each record 
SELECT * 
, CASE  
     WHEN Class = 'L' AND SafetyStockLevel >100 THEN 1 
     ELSE 0 
  END Constraint1  
, CASE  
     WHEN Class = 'M' AND SafetyStockLevel =100 THEN 1 
     ELSE 0 
  END Constraint2 
, CASE  
     WHEN (Class = 'L' AND SafetyStockLevel >100) 
        OR (Class = 'M' AND SafetyStockLevel =100) THEN 1 
     ELSE 0 
END Constraint3 
FROM Production.Product  

      
  The constraints in which one of the members is NULL and the IS NULL predicate or NULL-handling functions are not used are incorrectly evaluated, actually ignored, and in certain cases they might even look correct, though this depends also on the expressions used. For example, the below query will return no records, while an equivalent query handling the NULLs explicitly will return some records. For more on the correct handling of NULLs see Null-ifying the world or similar posts on the web.

--misusing NULL-based constraints 
  SELECT * 
FROM Production.Product 
WHERE (NOT 1=NULL) OR (1=NULL) 
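
    For instance, a version handling the NULL explicitly via IS NULL returns the products without a Class assigned (a minimal sketch of the correct formulation):

--handling NULL-based constraints explicitly 
SELECT * 
FROM Production.Product 
WHERE Class IS NULL 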
 
JOIN constraints

    JOINs based on a single JOIN constraint should in theory pose no problem, unless the wrong attributes are used in the JOIN, and that’s easy to do especially when no documentation is available on the topic or when it is incomplete. Functional or other technical specifications, physical or semantic models, metadata or other developers’ knowledge could help the developer figure out the relations between the various entities. On the other side, it’s in human nature to make mistakes – forgetting to add a constraint, using the wrong type of join or the wrong attributes in the join constraint. In addition, all the situations described above for WHERE clause constraints apply also to join constraints, independently of whether the ANSI or non-ANSI syntax is used, the latter based on writing the JOIN constraints in the WHERE clause, which makes the identification of the JOIN constraints harder – this being one of the reasons for which I recommend using the ANSI syntax.

    A technique I use on a daily basis in order to test my scripts is to check the changes in the number of records as each new table is added to the initial query. In a first phase it is enough to count the number of records from the table with the lowest level of detail and the number of records from the final query, ignoring the WHERE constraints. For example, let’s consider the following query written some time ago (most of the attributes were removed for convenience):

-- Purchase Orders 
 SELECT POD.PurchaseOrderDetailID 
, POD.PurchaseOrderID 
, POD.ProductID  
, POH.ShipMethodID 
, POH.VendorID 
FROM Purchasing.PurchaseOrderDetail POD 
     JOIN Purchasing.PurchaseOrderHeader POH 
       ON POD.PurchaseOrderID = POH.PurchaseOrderID 
           JOIN Purchasing.ShipMethod PSM 
              ON POH.ShipMethodID = PSM.ShipMethodID 
           JOIN Purchasing.Vendor SMF 
              ON POH.VendorID = SMF.VendorID  
     JOIN Production.Product ITM 
        ON POD.ProductID = ITM.ProductID 
WHERE POD.StockedQty >100 
     AND ITM.Class = 'L' 
 
    Usually I write the table with the lowest level of detail (in theory also with the highest number of records) first, adding the referenced tables gradually, starting with the tables with the lowest number of records, thus the lookup tables come first. This usually doesn’t affect the way the database engine processes the query (unless special techniques are used), and it allows a kind of systematic approach. It also allows me to test the query without making important changes. As I wrote above, the first test is done against the first table, commenting for this the other joins and eventually the WHERE constraints:

-- testing changes in the number of records - test 1 
SELECT count(1) NumberRecords 
FROM Purchasing.PurchaseOrderDetail POD 
/* JOIN Purchasing.PurchaseOrderHeader POH 
     ON POD.PurchaseOrderID = POH.PurchaseOrderID 
          JOIN Purchasing.ShipMethod PSM 
             ON POH.ShipMethodID = PSM.ShipMethodID 
          JOIN Purchasing.Vendor SMF 
             ON POH.VendorID = SMF.VendorID 
      JOIN Production.Product ITM 
         ON POD.ProductID = ITM.ProductID 
WHERE POD.StockedQty >100 
AND ITM.Class = 'L' */ 
       
    In the second step it is enough to check the number of records returned by the whole query without the WHERE constraints:

-- testing changes in the number of records - test 2 
SELECT count(1) NumberRecords 
FROM Purchasing.PurchaseOrderDetail POD  
     JOIN Purchasing.PurchaseOrderHeader POH 
        ON POD.PurchaseOrderID = POH.PurchaseOrderID 
            JOIN Purchasing.ShipMethod PSM 
               ON POH.ShipMethodID = PSM.ShipMethodID 
            JOIN Purchasing.Vendor SMF 
               ON POH.VendorID = SMF.VendorID  
      JOIN Production.Product ITM 
          ON POD.ProductID = ITM.ProductID 
/* WHERE POD.StockedQty >100 
     AND ITM.Class = 'L' */       

      
    If the number of records is correct then most probably the query is correct; this because there could be incorrect LEFT JOINs that are not reflected in the number of records but in the fact that the corresponding attributes are missing. So don’t rely entirely on this type of test, but take 1-2 examples for which you are sure that records must be retrieved through the LEFT JOIN, and check whether the expected values are shown. Eventually, only for testing purposes, the LEFT JOIN could be changed into a FULL JOIN to see what records are returned.

    Now, if there are changes in the number of records from one test to the other, then take the first query and move the comment one join further (see below), and repeat the step until the JOIN that causes the variances is identified.

-- testing changes in the number of records - test 3  
SELECT count(1) NumberRecords 
FROM Purchasing.PurchaseOrderDetail POD 
JOIN Purchasing.PurchaseOrderHeader POH 
ON POD.PurchaseOrderID = POH.PurchaseOrderID 
/*      JOIN Purchasing.ShipMethod PSM 
           ON POH.ShipMethodID = PSM.ShipMethodID 
         JOIN Purchasing.Vendor SMF 
            ON POH.VendorID = SMF.VendorID 
     JOIN Production.Product ITM 
        ON POD.ProductID = ITM.ProductID 
WHERE POD.StockedQty >100       
AND ITM.Class = 'L' */          

 Once the join that causes the variances is found, it might then be necessary to add a new join constraint or to use a LEFT JOIN instead of an inner JOIN. For example, if in the above query there were ShipMethodID values set to NULL, then the query should have been written using a LEFT JOIN. On the other side, if duplicates are found, then do a grouping using the primary key of the table with the lowest level of detail, in this case Purchasing.PurchaseOrderDetail.
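
    To check for such duplicates against the lowest level of detail, a grouping on the primary key of Purchasing.PurchaseOrderDetail could look like below (a sketch following the test queries above):

-- checking for duplicates by the primary key of the lowest level of detail 
SELECT POD.PurchaseOrderDetailID 
, count(1) NumberRecords 
FROM Purchasing.PurchaseOrderDetail POD 
     JOIN Purchasing.PurchaseOrderHeader POH 
       ON POD.PurchaseOrderID = POH.PurchaseOrderID 
          JOIN Purchasing.ShipMethod PSM 
             ON POH.ShipMethodID = PSM.ShipMethodID 
          JOIN Purchasing.Vendor SMF 
             ON POH.VendorID = SMF.VendorID 
     JOIN Production.Product ITM 
        ON POD.ProductID = ITM.ProductID 
GROUP BY POD.PurchaseOrderDetailID 
HAVING count(1) > 1 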

HAVING clause constraints

    HAVING clause constraints behave much like WHERE clause constraints, the difference residing in the level at which the constraints are applied, so the same techniques can be applied in this case too, with the difference that the constraints need to be adapted to support aggregate functions.

--Vendors with total purchase value >100000 
SELECT POH.VendorID 
, SUM(POH.SubTotal) TotalSubTotal 
FROM Purchasing.PurchaseOrderHeader POH 
WHERE POH.Status IN (2,4) -- 2-Approved, 4-Complete  
GROUP BY POH.VendorID 
HAVING SUM(POH.SubTotal)>100000 

    Sometimes it helps to bring the HAVING constraints into the SELECT, though that offers limited functionality; therefore it may be more productive to link the aggregated query back to the base table(s) on which the statement is based:

--Vendors with total purchase value >100000 - base records 
SELECT A.* 
, B.TotalSubTotal 
, CASE 
     WHEN B.VendorID IS NOT NULL THEN 1 
     ELSE 0 
END ConsideredFlag 
FROM Purchasing.PurchaseOrderHeader A 
LEFT JOIN ( --aggregated data 
      SELECT POH.VendorID 
      , SUM(POH.SubTotal) TotalSubTotal 
      FROM Purchasing.PurchaseOrderHeader POH 
      WHERE POH.Status IN (2,4) -- 2-Approved, 4-Complete  
      GROUP BY POH.VendorID 
HAVING SUM(POH.SubTotal)>100000 
) B 
     ON A.VendorID = B.VendorID 
WHERE A.Status IN (2,4) -- 2-Approved, 4-Complete  

    This technique could also prove to be limited, though I found it useful in several cases, especially when a complicated formula is included in the aggregate function. The same query could be rewritten with a window aggregate function, introduced with SQL Server 2005.

--Vendors with total purchase value >100000 - window aggregate function 
SELECT POH.* 
, SUM(POH.SubTotal) OVER (PARTITION BY POH.VendorID) TotalSubTotal 
, CASE 
     WHEN SUM(POH.SubTotal) OVER (PARTITION BY POH.VendorID)>100000 THEN 1 
     ELSE 0 
END ConsideredFlag 
FROM Purchasing.PurchaseOrderHeader POH 
WHERE POH.Status IN (2,4) -- 2-Approved, 4-Complete 
 
    When working with amounts, quantities or other numeric values, it could be useful to check the total amount/quantity based on the query logic against the total amount/quantity based on the base table.
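
    A simple cross-check along these lines, comparing the base table total with the total resulting from the aggregated query (a sketch based on the example above):

-- cross-checking the totals between base table and aggregated query 
SELECT (SELECT SUM(POH.SubTotal) 
        FROM Purchasing.PurchaseOrderHeader POH 
        WHERE POH.Status IN (2,4)) BaseTotal 
, (SELECT SUM(B.TotalSubTotal) 
   FROM (-- aggregated data 
         SELECT POH.VendorID 
         , SUM(POH.SubTotal) TotalSubTotal 
         FROM Purchasing.PurchaseOrderHeader POH 
         WHERE POH.Status IN (2,4) 
         GROUP BY POH.VendorID 
        ) B) QueryTotal 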

💎SQL Reloaded: Troubleshooting Query Issues II (Errors)

    Some weeks ago I wrote a post on troubleshooting, looking at the topic from a problem-solving perspective. In what concerns the troubleshooting of SQL scripts, it depends also on the type of DML statement used and on the type of issues the developer deals with – errors thrown by the database engine, wrong number of records returned by a query, performance, accessibility or reusability related issues.

    Many of the errors normally come from wrong usage of syntax and functionality or from misspellings; in many situations the database engine provides enough information to lead the developer closer to the issue’s resolution, at least by narrowing the range of search, and here the problem-solving approach could be useful. In such situations it is useful to do a quick search in the local/online documentation or resources using the error number or the full/partial error message. I know that’s not rocket science, though I’ve seen people acting as if hypnotized by the error message, not knowing what to do next. A quick search using a powerful search engine like Google.com, Bing.com or Yahoo.com could lead you to the solution or help gather more data about the possible causes. More likely than not, somebody was in the same situation and has posted a question or even the answer somewhere in a forum or a blog. It’s always a good idea, and time-effective, to do a little research before posting a new question in a forum. Especially among beginners there are people who don’t know how to make use of the documentation, help tools or even a search engine, therefore it could be useful for them to take some time and look at some tips on how to make better use of such tools.

    In general, when an error is thrown, an error number that has a specific meaning for the vendor is provided, together with a general description and, in fortunate cases, also the line where the error occurs. For example, in SQL Server the execution of the below query based on the AdventureWorks database will return the next error message:

SELECT * 
FROM dbo.Purchasing.PurchaseOrderHeader POH 

Msg 208, Level 16, State 1, Line 1
Invalid object name 'dbo.Purchasing.PurchaseOrderHeader'


   Even if in this case the error was created deliberately by using the name of a schema (e.g. dbo) instead of the database’s name, this doesn’t mean that the cause/solution is always so simple. For example, the same error message could be thrown by an issue related to KB837970, KB310882, KB289551 and several other articles that can be found by using the following search text in Google:
site:support.microsoft.com Microsoft.com "Msg 208, Level 16"

    Unfortunately, in what concerns the errors appearing in queries, the localization of errors is not so clean, depending also on the order in which the query is parsed by the engine; thus the issue might be shown as occurring at a line other than its actual location, namely at the line where the query parsing finds an issue.
  
    When the error message is in another language than English, it makes sense to try to find the corresponding error message in English and use it in the search because, as quite an important volume of documentation and other types of information is in English, the chances of finding something are higher. Normally that can be done by using the error code, which in theory should have the same meaning independently of the language used.
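
    In SQL Server the English text for a given error number can be looked up directly in the sys.messages catalog view, for example for the error 208 shown above:

-- retrieving the English (US) text for a given error number 
SELECT message_id 
, severity 
, text 
FROM sys.messages 
WHERE message_id = 208 
  AND language_id = 1033 -- English (US) 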

    Sometimes, when you didn’t know how to do something and just used your inspiration and tried some functionality without being fully aware of its implications and constraints, it makes sense to consult the documentation or do a search on the web on how to do the respective task (e.g. how to load data from Oracle to SQL Server). 10 minutes spent on research could help you save a few hours later, so don’t jump blindly into coding! In many situations it could happen that a solution is already available, and if you find the code, please consider not using it blindly; try to understand the how and why, its limitations and how it could eventually be improved, and some thorough testing might be necessary too. Don’t try to reinvent the wheel just for the sake of it; in what concerns programming, more likely than not others have had the same problem and found a neat solution to it. This doesn’t mean that you’ll have to do that for every problem you have, but rather for tasks of higher complexity than the ones you are accustomed to, the need for that decreasing with the level of expertise. Doing some research on a given topic can be time consuming, but could prove to be useful, especially when other developers used another technique. Actually, once you understood the idea, you could attempt to reconstruct the logic with your own coding and styling conventions – of course, it doesn’t always make sense to do that.

Note:
    You’ll also have to consider copyright infringements, but in theory the developers who posted code on the web for further reuse want only some minimal credit for that (e.g. posting a link to the website where the code was found, mentioning the developer’s name, etc.), while in some situations you may reuse the code only if you add substantial changes to it. Both statements apply not only to code but also to other types of work (content) available freely on the web (e.g. articles, tutorials, etc.).

    In case the search was unsuccessful, it makes sense to post your question in a forum, specifically in the area closest to your problem. Posting a question in the wrong forum has quite high chances of leading nowhere! Some forum members are quite critical of users who post questions without doing some research, especially when there is already an answer available in the same forum. In forums, users have to be aware that they are not only wasting their own time, but also the readers’ time. On the other side, the more places the same question is posted in, the higher the chances for others to reach them and eventually get an answer, this depending also on how search engines index the web pages.
 
    When the same error appears for the second time, in theory the developer should already know how to solve it, or at least the possible cause(s). Because the solutions are not always so simple to remember, as it is in human nature to forget, it makes sense to document the issues by describing the problem and the solutions (preferably with code) to such a level that it can be understood by other professionals too.

    When the number of errors occurring in your SQL scripts is quite high, especially when the same type of error keeps occurring, then maybe it makes sense to take a step back and ask yourself why that happens. Sometimes it just means that it is time to take a break; more likely, though, it may be needed to review the theory/documentation, make use of defensive programming techniques or use an editor with powerful IntelliSense functionality that checks the syntax and highlights the errors as you type. If you are a beginner, don’t worry, with time coding and troubleshooting become much easier.

    All the above ideas are simple common knowledge among developers, though such knowledge comes from (prolonged) experience – a combination of learning by doing and solving issues – “the school of hard knocks”. I just hope the above lines and the coming posts will be helpful to you.

03 June 2010

🧭Business Intelligence: Enterprise Reporting (VII: Addressable Questions in Reports’ Creation I)

Business Intelligence
Business Intelligence Series

The creation of a (data) report of any type involves most of the times several iterations until all the requirements are gathered, the query and eventually the final report are created, the report gets tested, the issues related to deliveries and requirements are mitigated, etc. Typically the steps involved in the creation of a report are straightforward for the developer, though for the user and the other people involved in the process everything might look like a black box, often also because of the lack of visibility and communication between the involved parties. Actually, at least in this case, the black box is not the problem but the input, and the output that’s highly dependent on the input, not to mention that any information omitted from the user’s side could have an unexpected impact on the output.

In many cases I found out that the information based on which a report is created is minimal; for example, the user comes to the developer with a question (e.g. what’s the total revenue for the past x years) and expects the developer to come up with the result in the form of a report with data reflecting the answer to the respective question. When formulating the requirements, the user might even come with a list of attributes he would like to see in such a report, though some of the attributes might not be possible to show given the needed level of detail, or it might make sense to add other attributes (e.g. Unit of Measure, Currency, Price Unit, Quantity) in order to avoid possible confusion and miscalculations, and to facilitate the understanding, testing or usage of such a report. There could be cases in which the user has no real idea of what he wants, finding it difficult to describe the problem and the exact data he might need, and this could happen also because the user isn’t aware of the data and their structure, the processes or the various constraints and other business rules that apply.

 When, from the whole initial dataset based on the requested attributes, the user needs only a smaller subset, this is translated into a set of constraints (e.g. the report is needed only for a single Vendor or a given time unit); then it makes sense to consider some of the constraints as parameters, allowing the report to fit a larger set of requirements. In addition, the user might need to use as filters additional attributes not already considered in the constraints. For an easier understanding of the report, the columns could be positioned in a certain order and the records sorted based on a given list of ordered attributes. For the same reason, special formatting could be applied to some of the attributes; for example, the values could be right/center/left aligned, the dates written in a predefined format (e.g. DD/MM/YY), the values rounded to a given number of decimals, the negative values highlighted in another color, etc.

When the requirements don’t match the semantic or relational data model, the adequate level of detail must be found, for this maybe being needed to renounce some of the attributes, eventually split the report into a summary and a detail report, aggregate some of the values, or consider only the last/first record from a given subtable, all these aspects needing to be mitigated with the users. Some of the calculations the user needs could be included directly in the report, the exact formulas applied needing to be given as input by the users, the special cases needing to be mitigated too.

As the same data could come from different sources, having different timeliness, accuracy, consistency, completeness and availability, in other words different quality, the adequate data source needs again to be mitigated with the users, in theory the data with the most appropriate data quality needing to be considered. To these constraints could be added also the technical constraints derived from the volume of data, report type, storage type, available time, data transformations needed, reporting platform, etc.

There are several other important facts that should actually be considered in the requirements gathering phase, namely whether a similar report already exists, whether it could be used unaltered or needs to be modified by adding additional attributes, filters, formatting, etc. As all such work needs to have also a ROI argumentation, there are also cases in which the costs of creating such a report are higher than the costs associated with the user preparing the report manually; on the other side, such costs are not always so easy to quantify, though it’s a good idea to have the needed reports automated as much as possible. In such cases some input might be requested from the Functional and IT Managers, together with the enforcement of policies and processes to deal with such aspects.

The above aspects are gathered in the below set of questions; users and developers should address them together, and even if some apply only to developers, some awareness from the users’ side could be beneficial too:
Q1: What’s the problem statement? (What issues does the report try to address?)
Q2: What kind of data would need to be gathered (in order to solve or better understand the problem)?
Q3: What level of detail is needed?
Q4: What attributes are needed?
Q5: Is there a similar report (that could be extended or used as template)?
Q6: What are the report’s definition, purpose and financial argumentation?
Q7: Does it make sense to invest time, effort and money in creating such a report? What's the estimated effort to build the report? What's the halting point?
Q8: Which attributes does it make sense to add/remove?
Q9: What’s the (appropriate) source for the attributes?
Q10: What special filter constraints are needed?
Q11: Which of the filter constraints should be considered as parameters?
Q12: Which are the formulas that need to be applied for calculated values?
Q13: What (default) sorting does the user need?
Q14: What’s the needed order of the attributes in the layout?
Q15: Are there any aggregations that need to be added?
Q16: In what form should the report be delivered?
Q17: What formatting should be applied?
Q18: What’s the report’s frequency?
Q19: Who will be the report’s owner?
Q20: How many records will the average report have?
Q21: What other technical or logistic constraints qualify?
Q22: When is the report needed?
Q23: What documentation is needed?
Q24: Are there any sensitive information contained?
Q25: Who needs access to the report? Do any externals need access to it?
Q26: Are there further uses for the same data? 
Q27: What's the expected life-expectancy for the report?
Q28: What's the reporting platform on which the report should be built?
Q29: Does it make sense to split the report into multiple perspectives?
Q30: What's the expected performance for the report? Are there any limitations?

Note:
The above list of questions should not be considered complete; some of the questions could be merged, while some could lead to other questions, especially in the area of technical and logistical constraints that need to be addressed adequately.


Market Review: What's New around the World

Instead of Introduction

    May is gone and it was quite a stressful month because, given the fact that I had to start a new job, a new position I will have to grow into, I also had to find a new apartment, move and install myself, so I barely had time for my blogs, the little time I had being spent reading stuff related to the new requirements, trying to improve my German, etc. Even so, I found a few interesting resources, adding them to the pile of other interesting links I collected over time. So, I realized that maybe it makes sense to group some of them in a set of postings under the title “What's New (In My World)”; hopefully they will be of interest to other people too.

Free Universities and Resources for SQL Developers & Co.

    I was kind of positively surprised to discover that several known authorities in their fields opened their own online Universities for the masses, a new free form of study on the web that takes advantage of the latest trends in online learning – webcasts, PowerPoint presentations, blog posts, online communities, new technologies and applications, etc. The most unexpected appearance is the SQL University, having as faculty known professionals and bloggers like Jorge Segarra, Adam Machanic, Pinal Dave or Kendal Van Dyke, to mention only the ones whose blogs I knew previously, but also other, not less important, professionals. The posts cover topics like SSIS, PowerShell, parallelism, database administration, etc., and I find them quite a useful source of knowledge not only for beginners but also for experienced professionals.

    Early this year I discovered the BeyeUniversity, which currently offers a series of 5 free certificate programs for BeyeNetwork members, covering Business Intelligence related topics, the online video courses having an easy-to-follow format, the content being approachable also by non-professionals. Among the instructors can be recognized Bill Inmon, one of the pioneers of data warehousing, leading the course on Unstructured Data and being also a columnist in the BeyeNetwork newsletters, several other webcasts with him being available as well. Ralph Kimball, the other important personality in the area of data warehousing, has opened the Kimball University and, even if its courses are not free, on his website can be found a big collection of articles reflecting the ideas of his school of thought.

    The MIT OpenCourseWare initiative is quite old, though maybe not so many people know that courses on database related topics are also made available, for example Database Systems, Database, Internet, and Systems Integration Technologies or Data Mining, the lecture notes and study materials offering a high-level overview of the concepts related to the mentioned subjects. Some of the other courses offer also video or audio content, though more such resources can be found on YouTube EDU, Academic Earth, Lecturefox or similar sites, a growing range of database related topics being covered. Similar resources can also be found with a simple search using your favorite search engine, as many Universities have made their text, video or audio lecture notes available online, voluntarily or involuntarily.

Microsoft News

    With SQL Server 2008 R2, Visual Studio 2010 and MS Office 2010, Microsoft closes a new cycle of releases, the products reflecting Microsoft’s philosophy and the degree to which the company managed to hold its promises. It will be interesting to see how the products will be regarded by professionals and press, how they fit in the general technology trends, how they could be used together in order to provide highly scalable and available solutions. I really hope that Microsoft has made some important steps in bridging the gaps existing in their tools’ 64-bit integration, and that the newly added functionality makes a difference.

24 April 2010

🧭Business Intelligence: Troubleshooting (Part I: A Problem Solving Approach)

Business Intelligence Series


Introduction

On several occasions I have observed that there are SQL and non-SQL developers who don’t know how to troubleshoot a programming problem in general, respectively a SQL-related issue in particular, and I’m not referring here to the complex problems that typically require the expertise of a specialist, but to simple day-to-day situations: troubleshooting an error thrown by the database engine, an error in the logic, a performance issue, unavailability of resources, etc. I’m not necessarily talking here about the people posting questions on forums, professional networks or blogs, even if in many situations they could have found an answer to the problem by doing a little research, but about seeing developers actually at work. It’s true that there are also many cases in which the software throws an error message that you don’t know where to start from, or the error is pointed as appearing at another line than the one at which it actually occurs, leading the developer onto a false thread.

Before going into detail let’s take a short look at troubleshooting and what it means! Paraphrasing Wikipedia’s general definition of troubleshooting, troubleshooting in IT is a type of problem solving applied to software and infrastructure related issues. Software issues refer not only to the various types of errors thrown by software applications, but also to functional, rendering or configuration errors, performance issues, data quality issues, etc. Infrastructure related issues could refer to the IT infrastructure – network, information systems, processes, methods or methodologies used. In this post I will refer only to software issues, even if the technique(s) for troubleshooting this kind of issues could be applied also to infrastructure issues.

Polya’s Approach to Problem Solving

In his book 'How To Solve It', G. Polya, a well-known Hungarian mathematician, advances a natural 4-step approach to solving a problem: 1. understanding the problem, 2. devising a plan, 3. carrying out the plan, and 4. looking back [1]. G. Polya’s approach could be used for all types of problems, including IT problems, and even if I find this approach too high-level for solving this type of problems, it’s actually a cornerstone on which more detailed approaches could be built. Let’s look shortly at each of Polya’s four steps!

1. Understanding the problem 
 
Understanding the problem comes down to identifying what is known – the data, the actual facts – and what is not known – what causes the issue and how it will be solved. Somebody was saying that a problem well understood is half solved, and there are quite good chances of arriving at the wrong solution if the problem is not well understood. If in Mathematics the problem is defined beforehand together with the whole range of constraints, in IT, for example, when troubleshooting, the problem first needs to be defined; in the context of this post the problem revolves around a technical or business issue appearing in the form of an error message, an unexpected/wrong application behavior, a wrong process, etc. Thus the actual facts could come down to the error message, current vs. expected behavior, tools used, high/low level design, business logic, affected objects, specific constraints, etc.
 
Defining the issue/problem might not be as simple as it seems, especially when the issue is pointed out by other people in informal, non-technical terminology, with fuzzy formulations like “there is an error in the XYZ screen” without actually detailing what the issue is about, the steps followed and the input that resulted in the respective issue, and other such aspects that need to be addressed in order to understand the problem. All these aspects are not known by the developer, though with a little investigation they are transformed into known information, this involving communication with the users, looking in the documentation, and gathering any other facts. Actually we could group all these actions under the “gathering the facts” syntagma, and this type of operations could be considered as part of this step because they are intrinsic to understanding the problem.

2. Devising a plan 
 
In this step one attempts to find the connection between the data and the unknown, looking at the problem from different angles in order to obtain an idea of the solution, to make a plan [1]. We have a plan when we know which steps we have to follow in order to identify the issue (solve the problem); they don’t have to be too detailed, but addressable, not necessarily complete but forming a base that can evolve with time, for example when new information/results are found. There could be multiple directions to look into, for example based on the possible list of causes, the constraints the various features come with, different features for implementing the same thing, etc.
 
Naturally, the first question a developer should ask: have I seen this issue before in actual or slightly modified form? Could the problem be broken down into smaller (known) problems? Could anything useful be derived from the data, have all essential notions involved in the problem been considered [1]? Essential notions – that’s always a thing to look into, mainly because I would say that many issues derive from feature constraints or from misuse of features. Tools like brainstorming, check lists, root-cause analysis or conceptual mapping could be used, in fact any tool which could help us track the essential notions and the relations between them.

3. Carrying out the plan 
 
Once the plan is sketched, we can go on and approach each of the branches of the plan, performing the successive steps in one branch until we find an end-point (a point from which we can’t go further). There could be branches going nowhere, multiple solutions, or no apparent solution to the problem. Everything is possible… More likely, while advancing in carrying out the plan, we could discover other intermediary steps, other branches (alternatives of arriving at the same result or of approaching different constraints).

4. Looking back 
 
According to Polya, this step comes down to examining the solution [1], reviewing the argumentation used, the solution’s construction, whether the solution is optimal, whether it could be reused to solve other types of problems or whether it could be improved/refactored. Actually this is a step many developers completely ignore: they found a solution, it’s working, so their work is done! No, even when pressed by time these aspects of problem solving should be considered, and from my point of view this step also includes actions like documenting the issue, and in special cases communicating the solution found to the circle of professionals (e.g. in terms of best practices or lessons learned, why not a blog post, etc.). Topics like optimality and refactoring are quite complex and deserve a post of their own, therefore I will limit myself to mentioning only the fact that they typically consider the solution from the point of view of performance, complexity, (re)usability and design, the developer having to trade between them and other similar (quality) dimensions.

Beyond Polya’s Approach

A natural question: do we really have to follow this approach?! Come on, there will be cases when you’ll have the solution without actually defining the problem (explicitly) or devising a plan (explicitly), or only by listing the scope and the constraints! Unconsciously we are actually following the first three steps, but forget or completely ignore the fourth, and I feel that applying Polya’s approach brings some “conscious thought” into this process that could help us make the most of it.
 
In many cases the solution will be there in the documentation, giving developers some explicit or implicit hints about the areas in which to search; for example, in case of an error related to a query, a first input is the error message returned by the database engine. Fortunately, RDBMS vendors like Microsoft and Oracle also provide a longer description for each error, thus allowing to understand what the error message is about. This is the happiest case; there are also many software tools that, after running for half an hour, return a fuzzy error message (e.g. ‘an error occurred!’), and nothing more.

Thank God for the Internet, a dynamic knowledge repository in which a lot of knowledge can be found with just a simple click, but which is also sensitive to the input. In many cases I could find one or more solutions or hints for an error I had, usually just by copy-pasting the error number or the error description, or, when the description is too long, only its most important part. I observed that there is quite an important number of professionals who prefer to post their issue in a forum or professional group instead of doing some research by themselves, this lack of effort helping to increase the volume of redundant information on the web, which comes with negative but also positive implications.

When we perform such a search, we actually rely on the solution provided by other users, shortcutting the troubleshooting process, and, with the risk of repeating the same syntagma, this comes with negative but also positive implications. For example, a negative aspect is that people don’t learn how to troubleshoot by themselves, relying instead on readily available solutions, while a positive aspect is that less time is spent in the troubleshooting process, at least in theory. Actually, considering the mentioned positive aspect, that’s also why I consider the “looking back” step important, and I’m referring especially to the action of documenting the issue.
 

References:
[1] G. Polya (1973) How To Solve It: A New Aspect of Mathematical Method, 2nd Ed. Princeton University Press. ISBN: 0-691-08097-6.
