A Software Engineer and data professional's blog on SQL, data, databases, data architectures, data management, programming, Software Engineering, Project Management, ERP implementation and other IT related topics.
28 January 2010
💎SQL Reloaded: Query Writing I (Best Practices)
- Use ANSI-compliant syntax as much as possible.
- Use aliases for all the tables.
- Design for performance, reusability and security.
- Reduce unnecessary network traffic.
- Group together the attributes coming from the same table.
- Rename columns in order to avoid confusion, though don't overdo it.
- Use a uniform coding style, formatting and naming conventions.
- Use indexed join predicates.
- Learn about the strengths/weaknesses of each feature before using it.
- Use versioning & keep older versions.
- Test the queries.
- Write unit tests first [2].
- Handle missing values (NULLs).
- Document database objects inline and in specific documents (e.g. Data Dictionaries, Functional Specifications).
- Refactor code.
- Use SQL tuning tools.
- Write tiny chunks of code: encapsulate formulas and business logic in functions [2], avoid inline scalar functions (see the sketch after this list).
- Defensive coding: use exception handling, consider all scenarios.
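For instance, a minimal T-SQL sketch of the last few points (encapsulating business logic in a reusable object, handling NULLs, aliasing all tables), assuming simplified, hypothetical PO tables:

-- business logic for 'open' POs encapsulated in an inline table-valued function
-- (preferred over scalar functions called row by row)
CREATE FUNCTION dbo.OpenPOHeaders ()
RETURNS TABLE
AS
RETURN (
    SELECT POH.POHeaderID
         , POH.VendorID
         , POH.OrderDate
    FROM dbo.POHeaders POH                         -- hypothetical table
    WHERE POH.Status IN ('Pending', 'Approved')    -- assumed status values
);
GO

-- reuse of the encapsulated logic; all tables aliased, NULLs handled explicitly
SELECT POH.POHeaderID
     , COALESCE(V.VendorName, 'n/a') AS VendorName
     , POH.OrderDate
FROM dbo.OpenPOHeaders() POH
     LEFT JOIN dbo.Vendors V                       -- hypothetical table
       ON POH.VendorID = V.VendorID;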
Things to avoid:
- Complex expressions in search conditions [1].
- Join predicates on expressions [1].
- Expressions over columns in local predicates [1].
- Data types mismatches on join columns [1].
- Non-equality join predicates [1].
- Unnecessary outer joins [1].
- Redundant predicates [1].
- Multiple aggregations with DISTINCT.
- Building queries dynamically unless necessary.
- Techniques that force full-table scans: functions that can't use or perform poorly on indexes, wildcards at the beginning of a search term (see the sketch after this list).
- Using more attributes/records than needed (a particular case: using * instead of specifying the attributes).
- Using nested views.
- Relying entirely on the code created by wizards and other automation tools.
- UNION unless really needed: use UNION ALL.
- UNIONs instead of condition-based code (e.g. CASE, DECODE) or self-joins.
- Using temporary tables.
- Server side cursors.
- Procedural queries (e.g. loops, cursors) rather than using set-based queries.
- Redundant logic/code.
- Negations on constraints.
- Code facilitating SQL injection: use parameterized objects.
- Hard-coding values.
- Undocumented functionality.
- Using GROUP BY in the final query when it could be applied in a sub-query.
- Multiple self-joins/joins to the same table instead of GROUP BY.
- Repetitive calls to the same function with the same parameters.
- Using constants in the ORDER BY clause.
- Creating too many versions of the same query.
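To make a few of the above points concrete, a small before/after sketch (a hypothetical dbo.Invoices table is assumed):

-- to avoid: the expression over the column and the leading wildcard prevent index seeks,
-- while * pulls more attributes than needed
SELECT *
FROM dbo.Invoices I
WHERE YEAR(I.InvoiceDate) = 2009
  AND I.VendorName LIKE '%tech%';

-- preferred: sargable date range, explicit attribute list, pattern anchored
-- at the beginning (where the requirement allows it)
SELECT I.InvoiceID
     , I.VendorName
     , I.InvoiceDate
     , I.Amount
FROM dbo.Invoices I
WHERE I.InvoiceDate >= '20090101'
  AND I.InvoiceDate <  '20100101'
  AND I.VendorName LIKE 'tech%';

-- preferred: UNION ALL when duplicates don't need to be removed
SELECT I.InvoiceID, I.Amount FROM dbo.Invoices I WHERE I.Status = 'Open'
UNION ALL
SELECT A.InvoiceID, A.Amount FROM dbo.ArchivedInvoices A WHERE A.Status = 'Open';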
Note:
Please note that each situation is different: a technique not recommended in general could prove to offer better performance than the alternatives (e.g. recursive simulation with a temporary table vs. hierarchical self-joins on SQL Server 2000), while in other cases several aspects need to be weighed against each other, for example the trade-off between performance and reusability. Even if most database vendors adhere to the ANSI SQL standard, in the end each feature could be implemented differently and vendors could provide additional functionality; therefore specific best practices can be considered for each feature/vendor.
References:
[1] IBM (????) "Best Practice - Writing and Tuning Queries for Optimal Performance" [Online] Available from: http://www.ibm.com/developerworks/wikis/display/data/Best+Practice+-+Writing+and+Tuning+Queries+for+Optimal+Performance (Accessed: 27 January 2010)
[2] Oracle (2009) "Cleaning Up PL/SQL Practices", by S. Feuerstein [Online] Available from: http://www.oracle.com/technology/oramag/oracle/04-mar/o24tech_plsql.html (Accessed: 27 January 2010)
Resources:
Microsoft TechNet (2010) "SQL Server - Best Practices" [Online] Available from: http://technet.microsoft.com/en-us/sqlserver/bb671430.aspx (Accessed: 27 January 2010)
27 January 2010
🧭Business Intelligence: Cognos & SQL Server 2008 - The First Report (Test Drive)
Now I was ready to test the tool and create several reports, for which I first needed to create a data source - a connection to an existing database with several tables on which the reports will be based. SQL Server 2008's AdventureWorks database seemed adequate for this test; in addition, I know there are many SQL Server professionals already using it for testing or tutorials.
Step 1: Creating the SQL Server account
a. the Login name (e.g. cognosuser)
b. the authentication – because Cognos doesn't support Windows-based authentication, choose SQL Server authentication, enter the Password and confirm it. By default 'Enforce password expiration' is checked, though this option is not needed for the current requirement, so uncheck it.
c. Select the Default database, the database you’re intending to base your reports on (e.g. AdventureWorks)
d. Select the ‘Default Language’ (e.g. English)
Navigate to the 'User Mapping' tab and under 'Users mapped to this login' choose the databases you would like to give access to (e.g. AdventureWorks); for each database selected you'll have to select the schemas in scope (e.g. HumanResources, Person, Production, Purchasing and Sales), while under 'Database role membership for' select the db_datareader database-level role, then click 'Ok'.
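The same setup can also be scripted; a minimal T-SQL sketch roughly equivalent to the steps above (the password is a placeholder):

-- create the SQL Server authenticated login ('Enforce password expiration' unchecked)
CREATE LOGIN cognosuser
WITH PASSWORD = 'placeholder - use a strong password'
   , CHECK_EXPIRATION = OFF
   , DEFAULT_DATABASE = AdventureWorks
   , DEFAULT_LANGUAGE = us_english;   -- 'English' in the UI
GO

-- map the login to a user in the target database and grant read access
USE AdventureWorks;
GO
CREATE USER cognosuser FOR LOGIN cognosuser;
EXEC sp_addrolemember 'db_datareader', 'cognosuser';
GO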
It's always a good idea to test the new account before attempting to use it from an external application; this allows catching potential issues early in the process, saving your and others' time!
Step 2: Creating a Data Source
Step 3: Creating a Report
Instead of Report Studio, the standard tool for designing reports, for simple reports or queries one could use Query Studio, an easy-to-use authoring tool in which creating a report comes down to a simple drag and drop. For this you'll have to open Query Studio from the Launch item found on the top menu, this action opening a new browser window, in the next step selecting the package (actually the data source) you want to use. In 'Insert Data' mode select a table from the list (e.g. Location) and drag and drop it into the right section, all the table's columns being shown together with a sample set of data.
You could double-click on each column and give it a meaningful name – it's good practice to separate the words with spaces, though not really necessary. By clicking on the report's Title you could change its actual value and add a Subtitle if necessary. In 'Change Layout' you could change the report's layout by applying different Font, Border and Conditional Styles, much like in Excel, show row numbers by clicking 'Show row numbers' from Change Layout/Set Web Page Size, etc. Save the report once you've done the changes; the report will look something like the one in the image below:
Step 4: Running the Report
From the 'Run Report' menu item you could choose to run the report in mainly three modes – 'Run with All Data', 'Preview with Limited Data' or 'Preview with No Data'. Additionally you can export the report to PDF (Portable Document Format), several versions of Excel (2000, 2002 and 2007), CSV (Comma Separated Values) and XML (eXtensible Markup Language).
25 January 2010
🗄️Data Management: Data Quality Dimensions (Part VII: Structuredness)
Data Management Series |
In general, the term structured data refers to structures that can be easily perceived or known, that raise no doubt about the structure's delimitations. Unstructured data refers to textual data and media content (video, sound, images), in which the structural patterns, even if they exist, are hard to discover or not predefined, while semi-structured data refers to islands of structured data stored together with unstructured data, or vice versa.
Data structuredness is important especially when the processing of data with the help of machines is considered, the correct parsing of data being highly dependent on the knowledge about the data structure, either defined beforehand or deduced. The more structured the data and the more evident and standardized the structure, the easier it should be to process the data. Merrill Lynch estimates that 85% of the data in an organization is in unstructured form, a number most probably covering semi-structured data too. Making such data available in a structured format requires a considerable volume of manual work, combined eventually with reliable data/text mining techniques, a fact that reduces considerably the value of such data.
Text, relational, multidimensional, object, graph or XML-based DBMS are in theory the easiest to process, map and integrate, though that might not be as simple as it looks given the different architectures vendors come with and the fact that the structures evolve over time. To bridge the structural and architectural differences, many vendors make it possible to access data over standard interfaces (e.g. ODBC), though there are also systems that provide only proprietary interfaces, making data difficult to obtain in an automated manner. There are also other types of technical issues related mainly to the different data types and data formats, though such issues can be easily overcome.
In the context of Data Quality, the structuredness dimension refers to the degree to which the structure in which the data are stored matches the expectations, i.e. the syntactic set of rules defining it, considered across the whole set of records. Even a minor deviation in the structure of a record could lead to processing errors and unexpected behavior. The simplest example is a delimited text file - if any of the characters used to delimit the structure of the file also appears in the data itself, then there are high chances that the file will be parsed incorrectly, or the parsing will fail unless the issues are corrected.
References:
[1] Barry W Boehm et al (1978) "Characteristics of software quality"
[2] The Register (2006) "Structured data is boring and useless", by D. Nortfolk (link)
[3] P Wood (?) "Semi-structured Data"
23 January 2010
🧊🎡Data Warehousing: ETL (Part I: It's All about Synchronization)
Data Warehousing Series |
In a previous post on ETL: SSIS packages vs. SQL code I mentioned that there are three types of synchronization - of scope, business logic and data. The first type targets synchronizing the filters used to select the data, the second synchronizing the logic used in data processing, where applicable, and the third working with the same unaltered copy of the data.
The synchronization of scope is achieved by enforcing the same set of constraints on related data elements with the purpose of keeping the referential integrity between the various data elements. These relations can be parent-child relations like Purchase Order (PO) Headers/Lines, or reference/referenced relations like PO Lines and Products or Vendors. Such relationships can involve datasets coming from distinct systems, usually with different architectures - e.g., Products or Bill of Materials master data that could come from a Product Management Information (PMI) system.
The synchronization of business logic is usually a consequence of the synchronization of scope and concerns the set of business rules that must be applied in the logic used. For example, one may need to consider Products or Vendors only for the open POs. That would involve duplicating the logic for open POs across multiple database objects, logic that needs to be synchronized accordingly. One could reference directly the object encapsulating the logic for open POs, though that would create a recursive reference if the logic references the Products and/or Vendors as well. With one or two tricks this can be avoided.
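As a hedged sketch of one such trick, using hypothetical table names: the object defining the open-PO scope is kept free of references back to Products/Vendors, and the Products/Vendors extracts reference it instead of duplicating the open-PO logic:

-- single place encapsulating the 'open PO' scope
CREATE VIEW dbo.vOpenPOLines
AS
SELECT POL.POLineID
     , POL.POHeaderID
     , POL.ProductID
     , POH.VendorID
FROM dbo.POLines POL                    -- hypothetical tables
     JOIN dbo.POHeaders POH
       ON POL.POHeaderID = POH.POHeaderID
WHERE POH.Status = 'Open';              -- assumed status value
GO

-- Products limited to the synchronized scope, without repeating the open-PO logic
SELECT P.ProductID
     , P.ProductName
FROM dbo.Products P
WHERE EXISTS (SELECT 1
              FROM dbo.vOpenPOLines POL
              WHERE POL.ProductID = P.ProductID);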
All three types of synchronization should be targeted even if this complicates the design of the solution, as it's recommended in general to apply defensive architecting/programming. The increase in complexity is relative if one considers the effort needed for troubleshooting, plumbing or even the redesign that needs to be done to fix the issues.
🧊🎡Data Warehousing: ETL (SSIS packages vs. SQL code)
When it comes to ETL we could say there are two main philosophies - have as much of the business logic as possible in the (ETL) package, or use the package mainly for loading data from the various sources and have the business logic in the database as SQL-based code. As always, each of the two philosophies has its own advantages and disadvantages, though I would consider a third philosophy – design for performance, reuse and maintainability, resulting in a hybrid of the first two.
Performance
Designing for performance comes down to choosing the architecture/methods that provide the best performance individually and as a whole – either using package-based functionality, SQL-based functionality or a combination of both. In general SQL code is best suited for query-based data manipulation, while packages are better suited for sequential or workflow-based processing of data, though there are exceptions. Often it's a good idea to test the performance of the alternative approaches via a prototype, even if in time the developer arrives at a good knowledge of the methods that suit best from a performance point of view.
Reuse
Each of the two architectures allows a lower or higher degree of reuse through parameters, variables and the compartmenting of code, maybe with a plus for SQL code, which has in theory a greater maturity and flexibility than the package-based functionality, allowing a wider range of reuse resulting from the compartmenting of code in the various supported objects (stored procedures, functions, views or tables).
Maintainability
Maintainability, the ease of modifying packages or code, is an important factor because few are the cases in which the logic doesn't change over time, many projects having to deal with changes in requirements, sometimes implying a 180-degree change of the overall perspective. Therefore it's important to be able to modify the code/package and even change the architecture with a minimum of effort.
Refactoring
Refactoring comes down to modifying the code without changing its functional behavior in order to improve performance, readability, maintainability or extensibility, minimize the use of resources, remove redundant code, or adhere to standards, best practices and naming conventions. It is said that there is always room for improvement, performance being the dimension with the most important impact in the world of databases. Refactoring is not always necessary, though it's a good practice for achieving high-quality solutions.
Synchronization
I would say there are three types of synchronization – of scope, business logic and data; the first targets synchronizing the filters used to select the data, the second synchronizing the logic used in data processing, where applicable, and the third working with the same unaltered copy of the data. Considering the example of Invoices – Headers and Lines – the synchronization of scope would come down to applying the same constraints to the two, assuring that there is no Invoice Header without Lines, and vice versa. The data synchronization between the two refers to the fact that the data of the two data sets should be consistent, with no change in Invoice Headers not reflected also at Line level (e.g. total amount matching between Lines and Header). Business logic synchronization refers typically to the use of the same set of rules for similar purposes: if several transformations were used for Invoice Headers, they should be reflected accordingly also at Invoice Line level. Synchronization is actually quite an important topic, therefore I will revisit it in a further post.
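For the data synchronization part, a simple reconciliation query (hypothetical Invoice tables assumed) can flag the Headers whose total doesn't match the sum of their Lines, respectively the Headers without Lines:

SELECT IH.InvoiceID
     , IH.TotalAmount
     , SUM(IL.LineAmount) AS LinesAmount
FROM dbo.InvoiceHeaders IH                              -- hypothetical tables
     LEFT JOIN dbo.InvoiceLines IL
       ON IH.InvoiceID = IL.InvoiceID
GROUP BY IH.InvoiceID
       , IH.TotalAmount
HAVING SUM(IL.LineAmount) IS NULL                       -- Header without Lines
    OR ABS(IH.TotalAmount - SUM(IL.LineAmount)) > 0.01; -- amounts out of sync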
Testing & Troubleshooting
I find that the business logic implemented in SQL code is much easier to test than the logic implemented in packages, because in the first situation each object and step can in theory be tested individually/progressively, making it much easier to troubleshoot. Troubleshooting package logic can be quite complex because it is not always possible to view the input/output of each intermediary step.
Complexity
Complexity is reflected in the ease of understanding the logic broken down to pieces and as a whole. Packages are highly visual, making it in theory easier to identify and understand the steps and their flow, even for non-technical people, while SQL code might need auxiliary representations for the same purpose (e.g. data flow diagrams) and a higher level of expertise.
Security
Security is always a sensitive and complex topic, and in general it comes down to how securely the code and sensitive information (e.g. user name & password, data) are stored, who has access to execute it and the context in which the code is run. This can easily become quite a complex topic, being highly dependent on the architecture used.
Stability
We can talk about platform and design stability, which can often be a matter of perception and experience. Both SSIS and database engines can be considered stable development environments, the latter having in theory a greater maturity and flexibility, flexibility that could easily be taken to the extreme, creating bad coding monsters (e.g. looped calls to .dll libraries), thus impacting the design's stability, which is correlated to the adequate use of functionality, techniques and resources – each technology has its dos and don'ts, strengths and weaknesses.
Deployment
Deployment of business logic on the server can be quite easy or quite complex, depending on the overall architecture, the number of configurable items in scope, and the complexity of the dependencies existing between them. The deployment usually comes down to copying the code from one location to another, installing any dependencies and configuring the objects for use.
Scalability
Scalability in this context refers mainly to the degree the business logic can cope with an increased volume of records, and not necessarily with the number of requests, though this aspect could be considered too, case by case. SSIS and the database engines are designed to be highly scalable, though there are good and bad architectures, good uses and misuses of techniques. Designing for performance in theory equates to good scalability, unless the requirements make it difficult to have a scalable solution.
Learning curve
The learning curve of a technology is always an important factor that needs to be considered in development, as it reflects how much time an average developer needs to master the basic/average/complex functionality provided by the respective technology. ETL development generally requires an average knowledge of both - SSIS architecture, respectively SQL-based programming - though it's not easy to acquire both problem-solving mindsets. An SSIS developer usually attempts using as much as possible the functionality provided by SSIS, while a SQL developer the SQL-based functionality. In the end, it's important to know how to balance the two.
19 January 2010
🧭Business Intelligence: Enterprise Reporting (Part III: Levels of Accuracy)
Business Intelligence |
When considering the different levels of accuracy/detail within reports, given two modules/entities A and B, we might end up creating a set of reports with:
a. all records from A;
b. all records from B;
c. all records from A and the matching records from B, and vice-versa;
d. all records from A and aggregated data from B, when the level of detail is changed;
e. aggregated data from A vs. aggregated data from B in case of many-to-many cardinality, the aggregation being made at a level of detail at which the amounts are not duplicated;
f. mismatches between A and B for the same scope (which can be shown at different levels of detail).
These scenarios would allow users to choose the report with the needed level of detail/accuracy, though covering all existing scenarios could be quite expensive, not to mention that there are reports spanning more than two modules. On the other hand, two reports, one for A and one for B, would be sufficient as long as the reference attributes between modules are provided, leaving it to the users to match and aggregate the data, though this alternative could also prove problematic because of the volume of data, synchronization issues between data sets, lack of an adequate skill set, and all the related issues. The first solution is the ideal one, the second is workable, though in the end it depends on what makes the users happy.
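As an illustration, a sketch of scenarios (c) and (f) based on two hypothetical data sets (Invoices as A, Payments as B):

-- (c) all records from A and the matching records from B
--     (swap the join direction for the vice-versa case)
SELECT I.InvoiceID
     , I.Amount
     , P.PaymentID
     , P.PaidAmount
FROM dbo.Invoices I                      -- hypothetical tables
     LEFT JOIN dbo.Payments P
       ON I.InvoiceID = P.InvoiceID;

-- (f) mismatches between A and B for the same scope
SELECT COALESCE(I.InvoiceID, P.InvoiceID) AS InvoiceID
     , I.Amount
     , P.PaidAmount
FROM dbo.Invoices I
     FULL OUTER JOIN dbo.Payments P
       ON I.InvoiceID = P.InvoiceID
WHERE I.InvoiceID IS NULL
   OR P.InvoiceID IS NULL;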
⛏️Data Management: Consistency (Definitions)
18 January 2010
🗄️Data Management: Data Quality Dimensions (Part VI: Referential Integrity)
Data Management Series |
During conversions or data migrations it is important to assure that the various data sets loaded match the referential and domain integrity of the database in which the data will be loaded, otherwise the records not respecting the mentioned types of integrity will be rejected. The rejection itself might not be a problem for a few records, though when it happens at large scale the situation changes dramatically, especially when the system gives no adequate messages about the cause of rejection. A recommended approach is to assure that the scope is synchronized between the various data elements, and that the referential integrity of the datasets is validated before the data are loaded into the destination database.
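A minimal sketch of such a validation, assuming hypothetical staging tables: the PO Lines prepared for load are matched against the Products already available in the destination, the orphaned records being reported before the actual load:

-- PO Lines referencing Products missing from the destination
SELECT POL.POLineID
     , POL.ProductID
FROM staging.POLines POL                 -- hypothetical staging table
     LEFT JOIN dbo.Products P            -- destination master data
       ON POL.ProductID = P.ProductID
WHERE P.ProductID IS NULL;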
Note:
Expect the unforeseeable! It's always a good idea to check whether the referential integrity is kept by a system – there are so many things that could go wrong! In data migration solutions, data warehouses and, more generally, analytical solutions it is a good idea to have in place mechanisms that check for this kind of issues.
References:
[1] Halpin, T. (2001) "Information Modeling and Relational Databases: From Conceptual Analysis to Logical Design". Morgan Kaufmann Publishers. ISBN 1-55860-672-6
[2] MSDN (2009) "Data Integrity" [Online] Available from: http://msdn.microsoft.com/en-us/library/ms184276.aspx (Accessed: 18 January 2010)
🗄️Data Management: Data Quality Dimensions (Part V: Consistency)
References:
[1] Chapman A.D. (2005) "Principles of Data Quality", version 1.0. Report for the Global Biodiversity Information Facility, Copenhagen
[2] David Loshin (2009) Master Data Management
⛏️Data Management: Accuracy (Definitions)
"(1) A qualitative assessment of correctness, or freedom from error. (2) A quantitative measure of the magnitude of error." (IEEE, "IEEE Standard Glossary of Software Engineering Terminology", 1990)
[accuracy (of measurement):] "Closeness of the agreement between the result of a measurement and a true value of the measurand." (International Vocabulary of Basic and General Terms in Metrology, 1993)
"A qualitative assessment of freedom from error or a quantitative measure of the magnitude of error, expressed as a function of relative error." (William H Inmon, "Building the Data Warehouse", 2005)
"Accuracy is the closeness of a measured value to the true value."
"A data element’s degree of conformity to an established business measurement or definition. Data precision is the degree to which further measurements or definitions will show the same results." (Jill Dyché & Evan Levy, "Customer Data Integration: Reaching a Single Version of the Truth", 2006)
"Degree of conformity of a measure to a standard or a true value. Level of precision or detail." (Martin J Eppler, "Managing Information Quality" 2nd Ed., 2006)
"The accuracy reflects the number of times the model is correct." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)
"An aspect of numerical data quality connected with a standard statistical error between a real parameter value and the corresponding value given by the data. Data accuracy is inversely proportional to this error." (Juliusz L Kulikowski, "Data Quality Assessment", 2009)
"An inherent quality characteristic that is a measure of the degree to which data agrees with an original source of data (such as a form, document, or unaltered electronic data) received from an acknowledged source outside the control of the organization." (David C Hay, "Data Model Patterns: A Metadata Map", 2010) [accuracy in regard to a surrogate source]
"An inherent quality characteristic that is a measure of the degree to which data accurately reflects the real-world object or event being described. Accuracy is the highest degree of inherent information quality possible." (David C Hay, "Data Model Patterns: A Metadata Map", 2010) [accuracy in regard to reality]
"Freedom from mistakes or error, conformity to truth or to a standard, exactness, the degree of conformity of a measure to a standard or true value. (Michael Brackett, 2011)
"The degree to which a data attribute value closely and correctly describes its business entity instance (the 'real life' entities) as of a point in time." (DAMA International, "The DAMA Dictionary of Data Management", 2011)
"Accuracy is the quality or state of being correct or precise; accurate information is correct in all details (NOAD)." (Laura Sebastian-Coleman, "Measuring Data Quality for Ongoing Improvement ", 2012)
"Within the quality management system, accuracy is an assessment of correctness." (For Dummies, "PMP Certification All-in-One For Dummies" 2nd Ed., 2013)
"How closely a measurement or assessment reflects the true value. Not to be confused with precision [...]" (Kenneth A Shaw, "Integrated Management of Processes and Information", 2013)
"Accuracy is defined as a measure of whether the value of a given data element is correct and reflects the real world as viewed by a valid real-world source (SME, customer, hard-copy record, etc.)." (Rajesh Jugulum, "Competing with High Quality Data", 2014)
"Within the quality management system, accuracy is an assessment of correctness." (Project Management Institute, "A Guide to the Project Management Body of Knowledge (PMBOK® Guide)" 6th Ed., 2017)
"The degree to which the data reflect the truth or reality. A spelling mistake is a good example of inaccurate data." (Piethein Strengholt, "Data Management at Scale", 2020)
"The degree to which the semantic assertions of a model are accepted to be true." (Panos Alexopoulos, "Semantic Modeling for Data", 2020)
"The degree of how closely the data represents the true value of the attribute in the real-world context." (Zhamak Dehghani, "Data Mesh: Delivering Data-Driven Value at Scale", 2021)
"Closeness of computations or estimates to the exact or true values that the statistics were intended to measure." (SDMX)
"The capability of the software product to provide the right or agreed results or effects with the needed degree of precision." [ISO/IEC 25000]
"The closeness of agreement between an observed value and an accepted reference value." (American Society for Quality)
"The term “accuracy” refers to the degree to which information accurately reflects an event or object described." (Precisely) [source]
17 January 2010
🗄️Data Management: Data Quality Dimensions (Part IV: Accuracy)
Another concept related to accuracy and measurement systems is precision, which refers to the degree to which repeated measurements under unchanged conditions lead to the same results, further concepts associated with it being repeatability and reproducibility. Even if the concepts of accuracy and precision are often confused, a measurement system can be accurate but not precise, or precise but not accurate (see the target analogy), a valid measurement system targeting thus both aspects. Accuracy and precision can be considered dimensions of correctness.
Coming back to accuracy and its use in determining data quality, accuracy is typically strongly related to the measurement tools used; to assess it, the measurements need to be repeated for all or a sample of the dataset in order to identify whether the requested level of accuracy is met, an approach that could involve quite an effort. The accuracy depends also on whether the systems used to store the data are designed to store the data at the requested level of accuracy, a fact reflected in the characteristics of the data types used (e.g. precision, length).
Given that a system stores related data (e.g. weight, height, width, length) that should satisfy physical, business or common-sense rules, such rules can be used to check whether the data satisfy them with the desired level of approximation. For example, knowing the height, width, length and the composition of a material (e.g. a metal bar), the approximate weight can be determined and compared with the entered weight; if the difference is not within a certain interval then most probably one of the values was entered incorrectly. There are even simpler rules that might apply, for example that the physical dimensions must be positive real values, or, in a generalized formulation, rules that involve maximal or minimal limits and lead to the identification of outliers, etc. In fact, most of the time determining data accuracy comes down only to defining possible value intervals, though there will also be cases in which complex models and specific techniques are built for this purpose.
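A sketch of such plausibility rules in SQL, assuming a hypothetical table of metal bars with entered dimensions, density and weight (consistent units assumed):

SELECT MB.BarID
     , MB.EnteredWeight
     , MB.Height * MB.Width * MB.Length * MB.Density AS ComputedWeight
FROM dbo.MetalBars MB                                   -- hypothetical table
WHERE MB.Height <= 0                                    -- dimensions must be positive
   OR MB.Width  <= 0
   OR MB.Length <= 0
   OR ABS(MB.EnteredWeight - MB.Height * MB.Width * MB.Length * MB.Density)
        > 0.05 * MB.EnteredWeight;                      -- assumed tolerance of 5%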
There is another important aspect related to accuracy, the time dependency of data – whether the data changes or not with time. Data currency or actuality refers to the extent to which data are up to date. Given the above definition of accuracy, currency could be considered as a special type of accuracy, because when the data are not current then they don't reflect reality. If currency is considered as a standalone data quality dimension, then accuracy refers only to the data that are not time dependent.
🧭Business Intelligence: Enterprise Reporting (Part II: Levels of Detail)
Between two tables, and by extension two data sets, the relations could have any of four types of cardinality – many-to-many (represented as m:n), one-to-one (1:1), one-to-many (1:n) and the reverse many-to-one (n:1). A definition could be given for each of the cardinality types, though it's easier to remember that in the x-to-y compounds, between two tables or data sets A and B, x has the value 'one' when the number of references from table B to any record of A is at most 1 (there could be records not referenced at all), and 'many' when at least one record could be referenced more than once; the same logic applies for y, inversing the tables' perspective.
The level of detail (LOD) of a report (aka data granularity) is directly correlated with the changes in cardinality - when adding a data set to an existing data set, if a one-to-many or many-to-many cardinality is implied then the level of detail changes too. In other words, if a record in a given data set is referenced more than once in the newly added table then the LOD changes. A simple example of a change of LOD occurs in parent-child/master-detail relations. For example, if the Invoice Lines are added to a data set based on Invoice Headers then the LOD changes from Invoice Headers to Invoice Lines. If the Payments are added to the resulting data set then the LOD changes from Invoice Lines to Payments only if there are multiple Payments for an Invoice (one-to-many or many-to-many cardinalities), otherwise the LOD remains the same.
Summarizing, the level of detail of a report is the lowest level at which a cardinality change occurs at entity level. One can speak of a lower or higher LOD in relation to a given level of detail; for example Invoice Payments have a lower LOD than Invoice Lines, while Invoice Headers have a higher LOD.
Why is it necessary to introduce the LOD, especially when considering a relatively complex notion such as cardinality?! It is worth defining it mainly because the term is used when defining and clarifying reporting requirements, its use being intuitive rather than well defined and understood. When creating reports it's important to find the adequate LOD for a report and to know what methods can be used in order to pull data that would normally change the report's LOD without actually changing it.
The limitations imposed by the need to report at a certain LOD higher than the one implied by the raw data can be overcome with the help of aggregate functions used with GROUP BY constructs. Such constructs make it possible to bring into a report data found at a lower LOD than the reporting LOD by grouping the data based on a set of attributes. The aggregate functions provide functionality to calculate for a set of values the maximum, minimum, sum, count of records, standard deviation and other statistical values, all of them working on numeric data types, while maximum/minimum and count of records work also with dates or alphanumeric data types.
For example, using aggregate functions the Invoice amounts can be aggregated and shown at Invoice Header level; in contrast, it's not possible to do the same with quantities, because typically each Invoice Line refers to a specific Product, and as apples can't be counted with peaches, this means that in order to see the quantities, the level of detail has to be changed from Invoice Header to Invoice Line. The same technique can be applied to similar parent-child (master-detail) relations covering one-to-many or one-to-one cardinality, and also to many-to-many relations, which involve a higher reporting complexity.
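The Invoice example as a sketch (hypothetical Invoice tables assumed), the amounts being brought to Invoice Header LOD with aggregate functions and GROUP BY:

SELECT IH.InvoiceID
     , IH.InvoiceDate
     , SUM(IL.LineAmount)      AS TotalAmount     -- amounts can be aggregated
     , COUNT(IL.InvoiceLineID) AS NumberLines
FROM dbo.InvoiceHeaders IH                         -- hypothetical tables
     JOIN dbo.InvoiceLines IL
       ON IH.InvoiceID = IL.InvoiceID
GROUP BY IH.InvoiceID
       , IH.InvoiceDate;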
Direct many-to-many relations are seldom encountered; they involve mainly data sets in which the attributes used in the relation appear more than once in each data set. Taking an example from the ERP world, there can be multiple Receipts and Invoices for the same Purchase Order, thus if there is no direct match between Receipts and Invoices to identify uniquely which Invoice belongs to which Receipt, a report involving all three entities could be barely usable, records being duplicated (e.g. 2 Invoices vs. 3 Receipts result in 6 records in such a scenario). For such reports to be usable the LOD needs to be changed to a higher level on at least one side; thus, returning to the mentioned example, a report could show the Purchase Order details and aggregate the Invoice & Receipt Quantities/Amounts at Purchase Order level, or show detailed Invoices with aggregated Receipt information, respectively detailed Receipts with aggregated Invoice information.
Unfortunately, even if the aggregate functions are quite handy, they have their own limitations, it being difficult to use them in order to answer questions like "what was the last product sold for each customer" or "which was the latest invoice placed for each customer", at least not without considerable effort on the developer's side. Fortunately, database vendors have implemented their own more specialized type of aggregate functions - analytic functions in Oracle and window functions in SQL Server - which allow modeling such questions within a report. The downside for developers is that each database vendor comes with its own philosophy, so techniques and features working in one database might not work in another.
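For example, "the latest invoice placed for each customer" can be sketched with SQL Server's ROW_NUMBER window function (a hypothetical dbo.Invoices table is assumed):

SELECT DAT.CustomerID
     , DAT.InvoiceID
     , DAT.InvoiceDate
FROM (
    SELECT I.CustomerID
         , I.InvoiceID
         , I.InvoiceDate
         , ROW_NUMBER() OVER (PARTITION BY I.CustomerID
                              ORDER BY I.InvoiceDate DESC) AS Ranking
    FROM dbo.Invoices I                  -- hypothetical table
) DAT
WHERE DAT.Ranking = 1;                   -- latest invoice per customer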
15 January 2010
🕋Data Warehousing: Dimension Hierarchy (Definitions)
"An arrangement of members of a dimension into levels based on parent-child relationships, such as Year, Quarter, Month, and Day or Country, Region, State or Province, and City. Members in a hierarchy are arranged from more general to more specific." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)
"A navigation path that allows the user to move from summarized to detailed information with each level of the hierarchy represented by a different attribute." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)
"A series of master-detail relationships within a dimension. Many front-end tools require hierarchy definitions in order to support drilling features. Some aggregate navigation and construction tools, such as Oracle's materialized views, require explicit declaration of a hierarchy if a dimension is to be partially summarized by an aggregate fact table." (Christopher Adamson, "Mastering Data Warehouse Aggregates", 2006)
"A logical tree structure that organizes the members of a dimension such that each member has one parent member and zero or more child members." (Microsoft, "SQL Server 2012 Glossary", 2012)
14 January 2010
🗄️Data Management: Data Cleansing (Part I: An Introduction)
Data Management Series |
Step 1: Defining the scope
The scope definition comes down to the identification of the data elements, attributes and records that need to be cleaned. If the intent is only to improve overall data quality and nothing else, the focus will be mainly on the data elements and attributes perceived as having low data quality. For data conversions the elements in scope are defined based on the requirements of the destination system, while the attributes in scope are defined based on the overall business requirements; an exercise that needs to be performed is the data mapping between the destination on one side and the legacy system(s) and the other data sources (e.g. Excel, MS Access files) on the other side.
From case to case, in scope for cleansing might be all the records available in the legacy system(s) for a given set of data elements and attributes, or only a fraction of them; usually the focus is mainly on the active and open records, and eventually some historical records. By active records are meant those records not cancelled or disabled, either master or transactional data; by open records the (transactional) records not marked as closed, records on which an action is still pending (e.g. pending Invoice Receipt, Payment, etc.); while by historical records are meant the already closed transactional data (e.g. Purchase Orders, Sales Orders, etc.).
Given the importance of master data for the business, they are often the starting point for cleansing, being also the data elements with the longest issue-resolution times, the transactional data being in theory easier to clean. The two types of data need to be synchronized: the reference data related to the transactional data in scope need to be loaded too.
Step 2: Defining the data quality rules and error types.
The data quality rules are defined for the attributes in scope and the data quality dimensions considered to apply, typically data accuracy, completeness, conformity, consistency, duplication and referential integrity. The issues are eventually further categorized and prioritized.
Step 3: Validating the data quality rules
The data quality rules are validated against the actual data sets, the output of the rules being consolidated in reports that are then interpreted. The number of errors is reported and stored for historical trending in case several iterations are needed for data cleansing.
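A sketch of how such rules could be validated and their error counts consolidated, assuming a hypothetical Vendors staging table:

-- each SELECT implements one data quality rule; the counts feed the trend reports
SELECT 'Missing country code' AS RuleName
     , COUNT(*)               AS NumberErrors
FROM staging.Vendors V                            -- hypothetical table
WHERE V.CountryCode IS NULL OR V.CountryCode = ''
UNION ALL
SELECT 'Duplicated VAT number'
     , COUNT(*)                                   -- VAT numbers occurring more than once
FROM (SELECT V.VATNumber
      FROM staging.Vendors V
      WHERE V.VATNumber IS NOT NULL
      GROUP BY V.VATNumber
      HAVING COUNT(*) > 1) DUP;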
Step 4: Identifying the data quality issues.
The reports are reviewed by the data owners, the issues being identified and the rules validated, further steps being performed in order to identify the root causes of the discovered issues. This usually leads to the calibration of the data quality rules as new scenarios (e.g. exceptions, new error types) are discovered.
Step 5: Correcting the data quality issues.
The issues are mitigated and corrected. The issue resolution time, the time needed to correct the issues, is directly dependent on the volume of issues, their complexity, the number of resources allocated to the task, on whether the issues are corrected in the source or on a copy of the data, and on whether the cleansing is done manually or specific software tools are used. Specific data cleansing software tools can help decrease considerably the volume of manual work involved in the process, automatic and semi-automatic data cleansing methods being preferred whenever possible.
Step 6: Validating the changes against the same data quality rules.
Once the issues are corrected, the data sets are validated against the same rules used at Step 3, Steps 3 to 5 being repeated until the expected data quality level is achieved. As already highlighted in a previous post, Data Cleansing is not a one-time endeavor but an iterative process, new iterations being triggered by changes in rules or in scope, by the sequencing/scheduling of the cleansing work or by the reporting frequency (centered or not along the important milestones).
Step 7: Documenting/Reporting error results and validation outcomes, if needed.
Starting with the data elements, attributes, and the logic used to select the records in scope for cleansing, and ending with the data quality reports with the issues and the improvement trends, it makes sense to document everything and summarize it in a set of best practices and lessons-learned sessions.
Step 8: Modify processes/procedures to reduce issues occurrence.
There are chances to discover that the errors found during validation could be avoided by modifying the existing processes, procedures or standards, or by enforcing new validation rules in the legacy systems. When the changes are complex, individual projects might even be started to address them.
13 January 2010
🧭Business Intelligence: Reports Types
Standard reports – reports that come with a software application/package, as opposed to custom reports, reports created on customers' request.
Ad-hoc reports – reports built usually to satisfy one-time requests, though they can easily evolve to a standard report.
Graphic reports – reports providing graphical visualization of data with the help of charts.
Transactional reports (OLTP reports) – reports built in transactional systems, containing up-to-date data.
Analytic reports (OLAP reports) – reports built in an OLAP environment, containing data desynchronized from the OLTP environment, the data being refreshed on a periodic basis.
Predictive reports – reports relying on powerful data mining (DM) models and predictive techniques.
Parameterized reports – reports whose output is based on a set of predefined parameters.
Linked reports – reports that provide an access point to other reports.
Snapshot reports – reports that contain data retrieved at a specific point of time.
Cached reports – reports saved in order to improve the performance by reducing the number of requests to the database/report engine.
Click-through reports – reports whose display is based on interactive data selection
Drilldown reports – a set of reports on the same topic showing data at different levels of details, the navigation being made from higher to lower level of details.
Drill-through reports – reports accessible through a hyperlink from the original report.
Sub-reports – a report contained in the body of another report, allowing for example the display of parent/child or header/lines relations.
Metric-based reports – reports supposed to encompass the various types of business metrics; they can be further categorized in:
Health Metrics – reports designed to show the health of a system in terms of its usage and the adherence to the processes defined.
Growth Metrics - reports designed to show the growth of a system in terms of data, transaction or amount volume.
KPI (Key Performance Indicator) reports – reports designed to measure an organization's progress towards set organizational goals.
LPI (Lean Process Indicator) reports – reports designed to reflect business’ progress toward Lean Management organizational goals.
Dashboards – reports offering a bird's-eye view of several key performance indicators.
Another characterization of reports can be based on the functional department for which the report is created, thus we can speak of financial reports, operational reports, sourcing reports, (global) supply chain reports, marketing reports, maintenance reports, etc.
Note:
The term financial report might refer in particular to financial statements.