
03 March 2023

Data Warehousing: Building a Modern Data Warehouse with Azure Synapse

Data Warehousing

Introduction

When building a data warehouse (DWH), several keywords or derivatives of them appear in the requirements: secure, flexible, simple, scalable, reliable, performant, non-redundant, modern, automated, real-time, etc. As practice proves, all these requirements are sometimes challenging to address as the complexity of the chosen architecture increases. There are so many technologies on the DWH market promising all of this at low cost, low effort and high ROI, yet DWH projects continue to fail to address the business and technical requirements.

At a basic level, building a DWH requires a data storage layer and an ETL (Extract, Transform, Load) tool responsible for the data movement between the various source systems and the DWH, and eventually within the DWH itself. Beyond that, each technology added to the landscape tends to increase the overall complexity (and should be regarded with a critical eye in what concerns its advantages and disadvantages).

Data Warehouse Architecture (on-premise)

A Reference Architecture

When building a DWH, or a data migration solution that shares many of a DWH's characteristics, I prefer, among the many possible designs, to keep things as simple as possible. An approach based on a performant database engine like SQL Server as storage layer and SSIS (SQL Server Integration Services) as ETL tool has proved to be the best choice until now, allowing most of the technical requirements to be addressed by design. Then come the choices on how and where to import and transform the data, at what level of granularity, how the semantic layer is built, how the data are accessed, etc.

Being able to pull the data (see the extract subprocess) from the data sources on an as-needed basis offers the most flexible approach. However, there are cases in which direct access to the source data is not possible, and one has to rely on a push approach, where data are dumped regularly to a given location (e.g. an FTP folder structure) to be picked up as needed. It's actually a hybrid between push and pull, because a fully push approach would mean pushing the data directly into the DWH, which can also be acceptable, though it might offer less control over the data movement and involve a few other challenges (e.g. permissions, concurrency).

Data can be prepared for the DWH in the source systems (e.g. exposed via data objects or API calls), anywhere in between via ETL-based transformations (see the transform subprocess), or directly in the DWH. I prefer importing the data (see the load subprocess) 1:1, without any transformations, from the various sources via SSIS (or similar technologies) into a set of tables that designate the staging area. It's true that in this way the ETL technology is used to a minimum, though unless there's a major benefit in using it for data transformations, using the DWH's capabilities and SQL for data processing can provide better performance and flexibility.

Besides the selection of the columns in scope (typically columns with meaningful values), it's important not to apply any transformations in the extraction layer: the data are imported faster (eventually using fast-load options, as in SSIS) and an unchanged copy provides a basis for troubleshooting (as the data don't change between loads). Filters should be applied only when the volume of data is high and the subset of the data can be identified clearly (e.g. when data are partitioned based on a key like business unit, legal entity or creation date).

For better traceability, the staging schemas can reflect the systems they come from, while the tables and the columns should have the same names and data types as in the source. No constraints are applied and no indexes are needed on such tables. They can, however, be created on the production tables (aka base tables) - the copies of the tables from the source systems.
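A minimal T-SQL sketch of such a setup, assuming a hypothetical source system abbreviated "ax" with a Customers table (all names and data types are illustrative):

-- staging schema named after the source system
CREATE SCHEMA ax;
GO

-- 1:1 copy of the source table: same column names and data types,
-- no constraints, no indexes
CREATE TABLE ax.Customers (
    CustomerId   int           NULL
  , CustomerName nvarchar(100) NULL
  , CountryCode  nvarchar(10)  NULL
  , Status       int           NULL
  , CreatedOn    datetime2(3)  NULL
);
GO

-- production (base) table populated from staging; constraints and
-- indexes can be added here as needed
SELECT *
INTO dbo.Customers
FROM ax.Customers;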

Some DWH architects try replicating the constraints from the source systems and/or add further constraints on top to define the various business rules. Rigor is good in some scenarios, though it can involve considerable effort and might be challenging to maintain over time, especially when considering the impact of big data on DWH architectures. Instead of using constraints, building a set of SQL scripts that pinpoint the issues as reports allows more flexibility, with the risk of having inconsistencies running wild through the reports. Ideally, the data should be cleaned in the source system; when that's not possible, the issues should be properly addressed in the DWH. Applying constraints makes the data unavailable for reporting until they are corrected, while being more permissive allows dirty data through. Thus, either approach has advantages and disadvantages, though the latter seems more appropriate.
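For example, instead of enforcing a foreign key or a NOT NULL constraint, a report-like script can pinpoint the records that break a rule (a sketch based on the hypothetical tables above, plus a dbo.Countries lookup table):

-- customers with missing mandatory values or unmapped countries
SELECT CST.CustomerId
     , CST.CustomerName
     , CST.CountryCode
FROM dbo.Customers CST
     LEFT JOIN dbo.Countries CTR
       ON CST.CountryCode = CTR.CountryCode
WHERE CTR.CountryCode IS NULL   -- country not defined in the lookup table
   OR CST.CustomerName IS NULL; -- mandatory value missing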

Indexes on the production schema should reflect the characteristics of the queries run on the data and shouldn't replicate the indexes from the source environments, even if some overlaps might exist. In practice, dropping the non-clustered indexes on the production tables before loading the data from staging, and recreating them afterwards proves to provide faster loading (see load optimization techniques). 
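A sketch of this load-optimization pattern, reusing the illustrative objects from above (the index name is hypothetical):

-- disable the non-clustered index before the load ...
ALTER INDEX IX_Customers_CountryCode ON dbo.Customers DISABLE;

-- ... reload the data from staging ...
TRUNCATE TABLE dbo.Customers;

INSERT INTO dbo.Customers (CustomerId, CustomerName, CountryCode, Status, CreatedOn)
SELECT CustomerId, CustomerName, CountryCode, Status, CreatedOn
FROM ax.Customers;

-- ... and rebuild it afterwards
ALTER INDEX IX_Customers_CountryCode ON dbo.Customers REBUILD;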

The production tables are used for building a "semantic" data model or something similar. Several levels of views, table-valued functions and/or indexed/materialized views allow building the dimension and fact tables, the latter incorporating the business logic needed by the reports. Depending on the case, stored procedures, physical or temporary tables, or table variables can be used to prepare the data, though they tend to break the "free" flow of data, as in-between steps need to be run. On the other hand, in certain scenarios their use is unavoidable.

The first level of views (aka base views) is built on the base tables without any joins, though they include only the fields in use (needed by the business), ordered and "grouped" together based on their importance or certain characteristics. The views can include conversions of data types, translations of codes into meaningful values, and quite seldom filters on the data. On top of these "base" views the second level is built, which attempts to define the dimension and fact tables at the lowest granularity. These views include joins between tables coming from the same or different systems, mappings of values defined in tables, and whatever else it takes to build such entities. However, transformations on individual fields are pushed, when possible, to the lower level to minimize logic redundancy. For similar reasons, the logic can be broken down over two or more "helper" views when visible benefits can be obtained from it (e.g. troubleshooting, reuse, maintenance). It's important to balance between creating too many helper views and encapsulating too much logic in a single view.
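A minimal sketch of the two levels of views, based on the illustrative tables above (the code translations and the dbo.vCountries helper are assumptions):

-- first level: base view over a single base table, no joins,
-- only the fields in use, with conversions and code translations
CREATE VIEW dbo.vCustomers
AS
SELECT CST.CustomerId
     , CST.CustomerName
     , CST.CountryCode
     , CASE CST.Status
         WHEN 0 THEN 'Inactive'
         WHEN 1 THEN 'Active'
       END AS StatusName
     , CAST(CST.CreatedOn AS date) AS CreatedDate
FROM dbo.Customers CST;
GO

-- second level: dimension at the lowest granularity, joining base views
CREATE VIEW dbo.vDimCustomers
AS
SELECT CST.CustomerId
     , CST.CustomerName
     , CST.StatusName
     , CTR.CountryName
FROM dbo.vCustomers CST
     LEFT JOIN dbo.vCountries CTR
       ON CST.CountryCode = CTR.CountryCode;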

One of the design principles used in building the entities is to minimize the redundancy of the fields used, ideally without having columns duplicated between entities at this level. This facilitates the traceability of columns to the source tables within the "semantic" layer (typically at the cost of a few more joins). In practice, one is forced to replicate some columns to simplify parts of the logic.

Further views can be built based on the dimension and fact entities to define the logic needed by the reports. Only these objects are used, and no direct reference to the "base" tables or views is made. Moreover, to offer better performance, the views can be materialized or, when there's an important benefit, physically persisted as tables (e.g. having multiple indexes for different scenarios). This is the case for entities with considerable data volumes that are called over and over.
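Where a heavily used entity justifies it, the logic can be persisted physically and indexed for the different scenarios - a sketch based on the dimension view above:

-- persist the entity physically for large, frequently queried data
SELECT CustomerId
     , CustomerName
     , StatusName
     , CountryName
INTO dbo.DimCustomers
FROM dbo.vDimCustomers;

-- indexes tailored to the reporting scenarios
CREATE CLUSTERED INDEX IX_DimCustomers_CustomerId
    ON dbo.DimCustomers (CustomerId);
CREATE NONCLUSTERED INDEX IX_DimCustomers_CountryName
    ON dbo.DimCustomers (CountryName);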

This approach of building the entities is usually flexible enough to address most of the reporting requirements, independently of whether the technical solution has the characteristics of a DWH, a data mart or a data migration layer. Moreover, the overall architectural approach can be used on-premise as well as in cloud architectures, where Azure SQL Database and ADF (Azure Data Factory) provide similar capabilities. Compared with standard SQL Server, some features might not be available, while other features might bring further benefits, though the gaps should be negligible.

Data Management topics like Master Data Management (MDM), Data Quality Management (DQM) and/or Metadata Management can be addressed as well by using third-party tools or tools from the Microsoft stack - Master Data Services (MDS) and Data Quality Services (DQS) in combination with SSIS help address a wide range of scenarios - however, these are optional.

Moving to the Cloud

Within the context of big data, characterized by (high/variable) volume, value, variety, velocity, veracity, and further less important V's, the technical requirements mentioned before still apply, however within a cloud environment the overall architecture becomes more complex. Each component becomes a service. There are thus various services for data ingestion, storage, processing, sharing, collaboration, etc. The way data are processed also involves several important shifts: ETL becomes ELT, FTP and local storage are replaced by Data Lakes, data packages by data pipelines, stateful by stateless processing, SMP (Symmetric Multi-Processing) by MPP (Massively Parallel Processing), and so on.

As file storage is less expensive than database storage, there's an increasing trend of dumping business-critical data into the Data Lake via data pipelines or features like Link to Data Lake or Export to Data Lake (*), which synchronize the data between the source systems and the Data Lake in near real-time at table or entity level. Whether saved as CSV, Parquet, Delta Lake or any other standard file format, in single files or partitions, the data can be used directly or indirectly for analytics.

Cloud-native warehouses allow addressing topics like scalability, elasticity, fault-tolerance and performance by design, though further challenges appear: compute needs to be decoupled from storage, the workloads need to be estimated to assure performance, data may be distributed across data centers spanning geographies, the infrastructure is exposed to attacks, etc.

Azure Synapse

If one wants to take advantage of the power of the MPP architecture, Microsoft provides an analytical architecture based on Azure Synapse, an analytics service that brings together data integration, enterprise DWH and big data analytics. Besides two types of SQL-based data processing services (dedicated vs. serverless SQL pools), it also comes with a Spark pool for in-memory cluster computing.

A DWH based on Azure Synapse is not that different from the reference architecture described above for an on-premise solution. Actually, a DWH based on a dedicated SQL pool (aka a physical data warehouse) involves the same steps mentioned above. 

Data Warehouse Architecture with Dedicated SQL Pool

The data can be imported via ETL/ELT pipelines into the DWH, though there are also mechanisms for consuming the data directly from the files stored in the Data Lake or Azure storage. CETAS (aka Create External Table As Select) can be defined on top of the data files, the external tables acting as "staging" or "base" tables in the architecture described above. When using a dedicated SQL pool, it makes sense to use the CETAS as "staging" tables, the processed data then being dumped into "optimized" physical tables for consumption and refreshed periodically. However, when this happens the near real-time character of the data is lost. Using the CETAS as base tables would keep this characteristic as long as the data isn't saved physically in tables or files, perhaps at the cost of performance.
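A sketch of how this could look in a dedicated SQL pool, with an external table over the Data Lake files acting as "staging" table and a CTAS-based refresh into an optimized physical table (data source, file format and all object names are assumptions):

-- external table over the files in the Data Lake, acting as "staging" table
CREATE EXTERNAL TABLE stg.Customers (
    CustomerId   int
  , CustomerName nvarchar(100)
  , CountryCode  nvarchar(10)
)
WITH (
    LOCATION = '/bronze/customers/'
  , DATA_SOURCE = DataLakeSource      -- external data source defined beforehand
  , FILE_FORMAT = ParquetFileFormat   -- file format defined beforehand
);
GO

-- processed data dumped periodically into an "optimized" physical table
CREATE TABLE dbo.Customers
WITH (
    DISTRIBUTION = HASH(CustomerId)
  , CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT CustomerId
     , CustomerName
     , CountryCode
FROM stg.Customers;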

Using a dedicated SQL pool for direct reporting can become expensive, as the pool needs to be available at least during business hours for incoming user requests, or at least for importing the data and refreshing the datasets. When using the CETAS as base tables, a serverless (aka on-demand) SQL pool, which uses a pay-per-use billing model, can prove more cost-effective and flexible in many scenarios. By design, it helps to keep the near real-time character of the data. Moreover, even if the data are actually moved from the source tables into the Data Lake, this architecture has the characteristics of a logical data warehouse:

Data Warehouse Architecture with Serverless SQL Pool
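In a serverless SQL pool the same files can be queried directly, for instance via OPENROWSET, which keeps the near real-time character of the data (the storage account and path are placeholders):

-- querying the Data Lake files directly from a serverless SQL pool
SELECT CST.CustomerId
     , CST.CustomerName
     , CST.CountryCode
FROM OPENROWSET(
         BULK 'https://<storageaccount>.dfs.core.windows.net/datalake/bronze/customers/*.parquet'
       , FORMAT = 'PARQUET'
     ) AS CST;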

Unfortunately, unless one uses Spark tables, misuses views or adds an Azure SQL database to the architecture, there are no physical tables or materialized views in a serverless SQL pool. There's still the option to use data pipelines for regularly exporting intermediary data to files (incl. over partitions or folders), even if this involves more overhead, as it's not possible to export data via SQL syntax to the same files more than once (though this might change in the future). For certain scenarios it could be useful to store data in an Azure SQL database or a similar database, including a dedicated SQL pool.

Choosing between a serverless and a dedicated SQL pool is not an exclusive choice; both, or all three types of pools (if we also consider the Spark pool), can be used in the architecture for addressing specific challenges, especially when we consider that there are important differences between the features available in each of the pools. Moreover, one can start a PoC based on the serverless SQL pool and, when the solution becomes mature enough and is used across the enterprise, parts of the logic or all of it can be migrated to a dedicated SQL pool. This would allow saving costs at the beginning at the cost of further effort later.

Talking about the physical storage, data engineers recommend defining within a Data Lake several layers (aka regions, zones) labeled as bronze, silver and gold (and probably platinum will join the club anytime soon). The bronze layer refers to the raw data available in the Data Lake, including the files on which the initial CETAS are defined. The silver layer refers to transformed, cleaned, enriched and integrated data, the data resulting from the second layer of views described above. The gold layer refers to the data to which business logic was applied and which are prepared for consumption, the data resulting from the final layer of views. Of course, data pipelines can be used to prepare the data at these stages, though a view-based approach offers more flexibility and is easier to troubleshoot, manage and reuse than data pipelines.
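A minimal sketch of how the layers can map to views in a serverless SQL pool, assuming the bronze layer is exposed via external tables or OPENROWSET-based views (all names are illustrative):

-- silver: cleaned and integrated data, the second layer of views
CREATE VIEW silver.Customers
AS
SELECT CST.CustomerId
     , TRIM(CST.CustomerName) AS CustomerName
     , CTR.CountryName
FROM bronze.Customers CST
     LEFT JOIN bronze.Countries CTR
       ON CST.CountryCode = CTR.CountryCode;
GO

-- gold: business logic applied, ready for consumption
CREATE VIEW gold.CustomersByCountry
AS
SELECT CountryName
     , COUNT(*) AS CustomerCount
FROM silver.Customers
GROUP BY CountryName;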

Ideally the gold data should involve no or minimal further transformation before reaching the users, though that's not realistic. Building a DWH takes considerable time and the business can't usually wait until everything is in place. Therefore, reports based on the DWH will continue to coexist with reports directly accessing the source data, which will lead to controversies. Enforcing a single source of truth will help to minimize the gap, though it will not eliminate it completely.

Closing Notes

These are just the outlines of a minimal reference architecture. There's more to consider, as there are several alternatives (see [1] [2] [3] [4]) for each of the steps considered here, each technology, new feature or mechanism opening new opportunities. The advantages and disadvantages should always be considered against the business needs and requirements. One approach, even if recommended, might not work for all, though unless there's an important requirement or an opportunity associated with an additional technology, deviating from reference architectures might not be such a good idea after all.

Note:
(*) Existing customers have until 1-Nov-2024 to transition from Export to Data Lake to Synapse Link. Microsoft advises new customers to use Synapse Link.


Resources:
[1] Microsoft Learn (2022) Modern data warehouse for small and medium business (link)
[2] Microsoft Learn (2022) Data warehousing and analytics (link)
[3] Microsoft Learn (2022) Enterprise business intelligence (link)
[4] Microsoft Learn (2022) Serverless Modern Data Warehouse Sample using Azure Synapse Analytics and Power BI (link)
[5] Coursera (2023) Data Warehousing with Microsoft Azure Synapse Analytics (link) [course, free to audit]
[6] SQLBits (2020) Mahesh Balija's Building Modern Data Warehouse with Azure Synapse Analytics (link)
[7] Matt How (2020) The Modern Data Warehouse in Azure: Building with Speed and Agility on Microsoft’s Cloud Platform (Amazon)
[8] James Serra's blog (2022) Data lake architecture (link)
[9] SQL Stijn (2022) SQL Building a Modern Lakehouse Data Warehouse with Azure Synapse Analytics: Moving your Database to the lake (link)
[10] Solliance (2022) Azure Synapse Analytics Workshop 400 (link) [GitHub repository]

11 March 2021

Microsoft Azure: Azure Data Factory (Notes)

Azure Data Factory - Concept Map

Acronyms:
Azure Data Factory (ADF)
Continuous Integration/Continuous Deployment (CI/CD)
Extract Load Transform (ELT)
Extract Transform Load (ETL)
Independent Software Vendors (ISVs)
Operations Management Suite (OMS)
pay-as-you-go (PAYG)
SQL Server Integration Services (SSIS)

Resources:
[1] Microsoft (2020) "Microsoft Business Intelligence and Information Management: Design Guidance", by Rod College
[2] Microsoft (2021) Azure Data Factory [source]
[3] Microsoft (2018) Azure Data Factory: Data Integration in the Cloud [source]
[4] Microsoft (2021) Integrate data with Azure Data Factory or Azure Synapse Pipeline [source]
[10] Coursera (2021) Data Processing with Azure [source]
[11] Sudhir Rawat & Abhishek Narain (2019) "Understanding Azure Data Factory: Operationalizing Big Data and Advanced Analytics Solutions"

04 February 2021

Data Migrations (DM): Conceptualization VII (Data Import Layer)

Data Migration
Data Migrations Series

The data requirements for the Data Migration (DM) and Data Quality (DQ) are driven by the processes implemented in the target system(s). Therefore, a good knowledge of these requirements can decrease the effort needed for these two subprojects considerably. The needed knowledge basis starts with the entities and their attributes, the dependencies existing between them and the various rules that apply, and ends with the parametrization requirements, respectively the architecture(s) that can be used to import the data.

The DM process starts with defining the entities in scope and their attributes, respectively identifying the corresponding entities and attributes from the legacy systems. The attributes not having a correspondent in the legacy systems need to be provided by the business and integrated into the DM logic. In addition, one needs to consider also the attributes needed by the business but not available in the target system, some of them more likely available in the legacy systems. For such attributes one needs either to misuse an attribute from the target system or to extend the target system.

For each entity a data mapping is created that basically documents the data transformations needed for migrating the data. In the process one needs to consider also the attributes' data types, the (standard) formatting, their domain of definition, as well as the various rules that apply. Their implementation belongs in the DM layer, from which the data are exported in a standard format, as needed by the target system.
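The mapping typically translates into queries in the DM layer, for example (the legacy table, codes and target format are hypothetical):

-- DM layer: transforming legacy data into the structure required by the target
SELECT CST.AccountNum                        AS CustomerAccount
     , UPPER(LTRIM(RTRIM(CST.Name)))         AS CustomerName
     , CASE CST.CustGroup
         WHEN 'WHS' THEN 'Wholesale'
         WHEN 'RTL' THEN 'Retail'
         ELSE 'Other'
       END                                   AS CustomerGroup
     , FORMAT(CST.CreatedDate, 'yyyy-MM-dd') AS CreatedDate
FROM lgc.Customers CST
WHERE CST.Blocked = 0;   -- only active customers are in scope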

Exporting the data from the DM layer directly into the target system's tables has in theory the lowest overhead, even if the rejected records are difficult to track, the rejections resulting only from the records' validation against the database schema. For this approach to work, one must have a good knowledge of the database schema and of the business rules implemented in the target system.

To solve the issue of error logging, systems have a further layer on top of the database model, which also allows running data validation against the target system's business rules. Modern import frameworks allow loading the data via a set of standard files with a predefined structure. The data can thus be imported manually or via load jobs into the system, a log with the issues being generated in the process. Some frameworks even allow the manual editing of failed records and reimporting the data. Unfortunately, calling this layer from the DM layer is not possible from a database, though this would seldom bring a benefit. Some third-party tools attempt to improve the import functionality by calling the target system's import layer.

The import files must be generated from the DM layer in the required structure with the appropriate formatting. The challenge, however, resides in identifying all the attributes that should be in the scope of the load. It's an iterative process which sometimes is backed by trial-and-error heuristics. Unless the target system's validation rules are known beforehand, the rules need to be discovered in this process, which can prove time-consuming. The discoveries need to be integrated also into the DM logic, hence the large number of changes that need to be performed.

Given the dependencies existing between entities, the files need to be generated and loaded in a predefined order. These dependencies are reflected also in the data processing and the validation rules considered in the DM layer.

A quality checkpoint can be implemented between the export from the DM layer and the import to enforce the four-eyes principle. It's normally the last opportunity for trapping eventual issues. A further quality check is performed after the import by validating whether the data were imported as expected.

Previous Post <<||>> Next Post

30 December 2020

Data Warehousing: ETL (The Transform Subprocess)

Data Warehousing

As part of the ETL process, the Transform subprocess is responsible for bridging the gap between source and destination by leveraging SQL or the rich set of (data) transformations available in ETL tools, either to enable the implicit or explicit conversion between source and destination data types, or to transform the data as needed. 

Transformations act on data as operators, the challenge being to transform the data in the smallest number of steps in the most efficient way. Some of the transformations available in the ETL tools (e.g. conversions, sorting, sampling, joins, lookups, aggregation, pivoting, unpivoting) can be replaced by SQL-based logic. One can easily prepare the data directly in the extraction query, taking thus advantage of the power provided by the database engines. Moreover, the logic can be encapsulated in views or other objects and called as required by the extraction logic when the source database allows it. This approach allows maintaining the logic independently of the ETL packages.
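For example, the extraction logic can be encapsulated in a view in the source database and called 1:1 by the ETL package (a sketch with hypothetical objects):

-- view in the source database encapsulating the extraction logic
CREATE VIEW dbo.vExportSalesOrders
AS
SELECT SLS.SalesOrderId
     , SLS.CustomerId
     , CAST(SLS.OrderDate AS date)       AS OrderDate
     , CAST(SLS.Amount AS decimal(19,4)) AS Amount
FROM dbo.SalesOrders SLS
WHERE SLS.OrderDate >= '20200101';       -- only the data in scope

-- the ETL package then needs only a trivial extraction query:
-- SELECT * FROM dbo.vExportSalesOrders;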

Unfortunately, SQL can replace only the transformations that address sequential logic, not workflow-related logic (e.g. conditional splits, merges, multicasts, slowly changing dimensions) or logic that involves certain computational complexity (e.g. fuzzy groupings or lookups). Such gaps need to be filled by the ETL tools via the built-in transformations, by allowing developers to build custom logic, or by simply using COTS solutions, when they prove capable of filling the gap.

Copying the data 1:1 at table or entity-level from the source system(s) involves in theory the simplest transformations, transformations revolving mainly around conversions between data types. The casual troublemakers are the numeric and date values, which can be found in different formats or precisions in the various environments. As this can apply to the ETL environment itself, it’s important to consider environment-agnostic data types when possible (e.g. strings). 

Other sources of concern are the user-defined data types which don't have equivalents between the systems, thus needing additional transformations for further handling, respectively the invalid values which need to be handled accordingly. Besides the data from the source system(s) and the derived values, depending on the case one needs to consider the parameter-based or hardcoded metadata created in the process.
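A sketch of such conversions, using TRY_CAST/TRY_CONVERT to trap invalid values instead of failing the load (table and column names are illustrative):

-- converting text-based source values while flagging the invalid ones
SELECT SRC.RecordId
     , TRY_CAST(SRC.Amount AS decimal(19,4))   AS Amount
     , TRY_CONVERT(date, SRC.PostingDate, 104) AS PostingDate  -- dd.mm.yyyy
     , CASE
         WHEN TRY_CAST(SRC.Amount AS decimal(19,4)) IS NULL
           OR TRY_CONVERT(date, SRC.PostingDate, 104) IS NULL
         THEN 1 ELSE 0
       END                                     AS HasInvalidValues
FROM stg.Transactions SRC;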

Independently of the purpose of the ETL packages, it is usually required to document the data flow associated with them and the rules applied in the transformations in what is known as a mapping document. Such a document needs to be understandable by the business, as it can serve for Data Management, projects or other purposes. Even if it's almost impossible to document everything, at minimum the source and destination tables, the attributes considered in the mappings, and the most important rules the business should be aware of need to be provided. Otherwise, the technical people can always turn back to the SQL queries, when needed.

Some sources consider each non-trivial transformation as a business rule. Even if the rules used in transformations constrain the (business) data, not each rule is relevant for the business to the degree that it constrains some part of the business.

Data Migrations involve transformations between (database) schemas. Therefore, the logic required to move the data could in theory be handled with a few well-designed packages, though there are considerations like logic complexity, transparency, flexibility, performance or auditability which could be better handled by using other techniques (e.g. saving the data in intermediary tables, breaking down the logic into several steps). Such considerations can apply also to simple ETL packages. Therefore, it's important to recognize such scenarios, weigh the choices and choose what fits best. However, unless one knows what one's doing, it's recommended to use the methods one knows best.

Previous Post <<||>> Next Post

27 December 2020

Data Warehousing: Data Vault 2.0 (The Good, the Bad and the Ugly)

Data Warehousing
Data Warehousing Series

One of the interesting concepts that seems to gain adherents in Data Warehousing is the Data Vault – a methodology, architecture and implementation for Data Warehouses (DWH) developed by Dan Linstedt between 1990 and 2000, which evolved into an open standard with the 2.0 version.

According to its creator, the Data Vault is a detail-oriented, historical tracking and uniquely linked set of normalized tables that support one or more business functional areas [2]. To hold data at the lowest grain of detail from the source system(s) and track the changes that occurred in the data, it splits the fact and dimension tables into hubs (business keys), links (the relationships between business keys), satellites (descriptions of the business keys), and reference (dropdown values) tables [3], while adopting a hybrid approach between the 3rd normal form and star schemas. In addition, it provides a two- or three-layered data integration architecture, a series of standards, methods and best practices supposed to facilitate its use.
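A minimal sketch of the decomposition for a Customer entity (simplified; the hash keys, metadata columns and names follow common Data Vault conventions but are illustrative):

-- hub: the business key
CREATE TABLE dv.HubCustomer (
    HubCustomerKey char(32)     NOT NULL PRIMARY KEY  -- hash of the business key
  , CustomerNumber nvarchar(20) NOT NULL              -- business key
  , LoadDate       datetime2(3) NOT NULL
  , RecordSource   nvarchar(50) NOT NULL
);

-- satellite: descriptive attributes, historized per load date
CREATE TABLE dv.SatCustomer (
    HubCustomerKey char(32)      NOT NULL
  , LoadDate       datetime2(3)  NOT NULL
  , CustomerName   nvarchar(100) NULL
  , CountryCode    nvarchar(10)  NULL
  , RecordSource   nvarchar(50)  NOT NULL
  , PRIMARY KEY (HubCustomerKey, LoadDate)
);

-- link: relationship between business keys (e.g. customer and sales order)
CREATE TABLE dv.LnkCustomerOrder (
    LnkCustomerOrderKey char(32)     NOT NULL PRIMARY KEY
  , HubCustomerKey      char(32)     NOT NULL
  , HubOrderKey         char(32)     NOT NULL
  , LoadDate            datetime2(3) NOT NULL
  , RecordSource        nvarchar(50) NOT NULL
);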

It integrates several other methodologies that allow bridging the gap between the technical, logistic and execution parts of the DWH life-cycle – the PMI methodology is used for the various levels of planning and execution, while the Scrum methodology is used for coordinating the day-to-day project tasks. Six Sigma is used together with Total Quality Management for the design and continuous improvement of DWH and data-related processes. In addition, it follows the CMMI maturity model for providing a clear baseline for benchmarking an organization’s DWH capabilities in development, acquisition and service areas.

The Good: The decomposition of the source data models into hub, link and satellite tables provides traceability and auditability at raw data level, thus allowing the compliance requirements of Sarbanes-Oxley, HIPAA and Basel II to be addressed by design.

The considered standards, methods, principles and best practices are leveraged from Software Engineering [1], establishing common ground and a standardized approach to DWH design, implementation and testing. It also narrows down the learning and implementation paths, while allowing an incremental approach to the various phases.

Data Vault 2.0 offers support for real-time, near-real-time and unstructured data, while new technologies like MapReduce and NoSQL can be integrated within its architecture, though the same can be said about other approaches as long as there's compatibility between the considered technologies. In fact, except for the business entities' decomposition, many of the notions used are common to DWH design.

The Bad: Further decomposing the fact and dimension tables can impact the performance of the queries run against the tables, as more joins are required to gather the data from the various tables. The further decomposition of tables can lead to higher data storage needs, though this can be negligible compared with the volume of additional objects that need to be created in the DWH. For an ERP system with a few hundred meaningful tables the complexity can become overwhelming.

Unless one uses a COTS tool which automates some part of the design and creation process, building everything from scratch can be time-consuming, increasing thus the time-to-market for solutions. However, the COTS tools can introduce restrictions of their own, which can negatively impact the overall experience with the methodology.

The incorporation of non-technical methodologies can have positive impact, though unless one has experience with the respective methodologies, the disadvantages can easily overshadow the (theoretical) advantages.

The Ugly: The dangers of using the Data Vault are associated, as usual, with a poor understanding of the methodology, a poor skillset, or the attempt to implement the methodology without allowing some flexibility when required. Unless one knows what one is doing, bringing more complexity into a field which is already complex can easily impact projects' outcomes negatively.

Previous Post <<||>> Next Post

References:
[1] Dan Linstedt & Michael Olschimke (2015) Building a Scalable Data Warehouse with Data Vault 2.0
[2] Dan Linstedt (?) Data Vault Basics [source]
[3] Dan Linstedt (2018) Data Vault: Data Modeling Specification v 2.0.2 [source]

11 May 2018

Application Support: One Database, Two Vendors, No Love Story or Maybe…

Data Warehousing

Introduction

Situation: An organization has several BI tools provisioned with data from the same data warehouse (DW), the BI infrastructure being supported by the same service provider (vendor). The organization wants to adopt a new BI technology, though for this another vendor must be brought into the picture. The data the tool requires are already available in the DW, though the DW needs to be extended with logic and other components to support the new tool. This means that two vendors will be active in the same DW, and more generally in the same environment.

Question(s): What is the best approach for making this work? What are the challenges in making it work, considering that two vendors are involved?


Preliminary

    When you ask IT people about this situation, many will tell you that it's not a good idea, being circumspect about having two vendors within the same environment. Some will recall previous experiences in which things went really bad, or escalated to some degree. They will even show their disagreement through body language or increased tonality. Even if they also had good experiences with two vendors supporting the same environment, the negative experiences will prevail. It's the typical reaction to an idea when something that caused considerable trouble is recalled. This behavior is understandable, as humans generally tend to remember the issues they had more than the successes. Problems leave deeper marks than successes, especially when challenges are seen as burdens.

    Reacting defensively is a result of the "I've been burned once" syndrome. People react with aversion and tend to avoid situations in which they were burned, instead of dealing with them, instead of recognizing the circumstances that led to the situation in the first place, of recognizing the opportunities for healing and rising above the challenges.


   Personally, at first glance, caution would make me advise as well against having two or more vendors playing within the same playground. I had my share of extreme cases in which something went wrong and the vendors started acting like kids. Parents (in general, people who work with children) know what I'm talking about: children don't like to share their toys, and parents often find themselves in the position of mediating between them. When the toy gets broken it's easy to blame the other kid for it, just as somebody else must put the toy back in its place, because that somebody played with it last. It's a mix between "I'm in charge" and the blame game. Who needs that?

  At second glance, if parents made it work, why wouldn't professionals succeed in making two vendors work together? Sure, parents have more practice in dealing with kids, face such situations on a daily basis, and there are fewer variables to think about… I have seen vendors sitting together until they came up with a solution; I've seen vendors open to communicate, putting the customer first, even if that meant leaving the ego behind. Where there's a will there's a way.


The Solution Space

    In IT there are seldom general recipes that always lead to success, and whether a solution works or not depends on a series of factors – environment, skills, communication, human behavior and quite often chance, the chance of doing the right thing at the right time. However, a recipe can be used as a starting point, eventually to define the best-case scenario, what will happen when everything goes well. On the opposite side there is the worst-case scenario, what will happen when everything goes south. These two opposite scenarios are in general the frame within which a solution can be defined.

    Within this frame one can add several other reference points or paths, and these are made of the experience of people who have handled and experienced similar situations – what worked, what didn't, what could work, what the challenges are, and so on. In general, people's experience and knowledge prove to be a good estimator in decision making, and even if relative, it provides some insight into the problem at hand.


    Let's reconsider the parents put in the situation of dealing with children fighting over the same toy, though from the perspective of all the toys available to play with. There are several options available: the kids could take (supervised) turns playing with the toys, which could be a win-win situation if they are willing to cooperate. One can take the toys (temporarily) away, though this could lead to other issues. One can reaffirm who's the owner of each toy, each kid being allowed to play only with his own toy. One could buy a second toy, and thus break the bank, even if this will not make the issue entirely go away. Probably there are other solutions inventive parents might find.

    Similarly, in the problem stated above, one option, and maybe the best, is having the vendors share ownership of the DW by finding a way to work together. Defining the ownership for each tool can alleviate some of the problems but not all of them, the same as building a second DW. We can probably all agree that taking the tools away is not the right thing to do, and even if it's a solution, it doesn't serve the purpose.


Sharing Ownership

    Complex IT environments like the one of a DW depend on the vendors' capability of working together in reaching the same goal, even if different interests are in play. This presumes the parties' willingness to relinquish some control and share responsibilities. Unfortunately, not all vendors are willing to do that. That's the point where imaginary obstacles are built, and where effort needs to be put into eliminating such obstacles.

    When working together, often one of the parties must play the coordinator role. In theory, this role can be played by any of the vendors, and the roles can even change from case to case. Another approach is for the coordinator role to be taken by a person or a board from the customer's side. In the case of a data warehouse it can be an IT professional, a Project Manager or a BI Competency Center (BICC). This would allow coordinating the activities smoothly, as well as mediating the communication and other types of challenges faced.


    How will ownership sharing work? Let's suppose vendor A wants to change something in the infrastructure. The change is first formulated, briefly reviewed and approved by both vendors and the customer, and will then be implemented and documented by vendor A as needed. Vendor B is involved in the process by validating the concept and reviewing the documentation, its involvement being minimized. There can still be some delays in the process, though the overhead is somewhat minimized. There will also be scenarios in which vendor B only needs to be informed that a change has occurred, or sometimes it's enough if a change was properly documented.

    This approach also involves a greater need for documentation, versioning and established processes, their role being to facilitate the communication and track the changes that occurred in the environment.


Splitting Ownership

    Splitting the ownership involves setting clear boundaries and responsibilities within which each vendor can perform the work. One is forced thus to draw a line and say which components or activities belong to each vendor. 

    The architecture of existing solutions sometimes makes it hard to split the ownership, when the architecture was not designed for it. A solution would be to redesign the whole architecture, though even then it might not be possible to draw a clear line when there are grey areas. One needs eventually to consider the advantages and disadvantages and decide to which vendor the responsibility suits best.


    For example, in the context of a DW, security can be enforced via schemas within the same or different databases, though there are also objects (e.g. tables with base data) used by multiple applications. One of the vendors (vendor A) will get the ownership of the objects, thus when vendor B needs a change to those tables, it must request the change from vendor A. Once the changes are done, vendor B needs to validate them, and if there are problems further communication occurs. In total, this approach will take more time than if vendor B had done the changes alone. However, it works, even if it comes with some challenges.
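For example, the split can be reflected directly in schema-level permissions (the schema names and security groups are illustrative):

-- vendor A owns and maintains the shared base objects
GRANT SELECT, INSERT, UPDATE, DELETE, ALTER ON SCHEMA::base TO [VendorA_Developers];

-- vendor B gets read access to the shared objects ...
GRANT SELECT ON SCHEMA::base TO [VendorB_Developers];

-- ... and full rights only within its own schema
GRANT SELECT, INSERT, UPDATE, DELETE, ALTER ON SCHEMA::toolB TO [VendorB_Developers];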

    There's also the possibility of giving vendor B temporary permissions to do the changes, which will shorten the effort needed. Vendor A will still be in charge, and will have to provide the documentation and eventually do some validation as well.


Separating Ownership

    Giving each vendor its own playground is a costly solution, though it can be the only solution in certain scenarios, for example when an architecture is supposed to replace (in time) another, or when the existing architecture has certain limitations. In the context of a DW this involves duplicating the data loads, the data themselves, as well as the logic, eventually the processes, and so on.

    Pushing for this just to solve a communication problem is the wrong thing to do. What happens if a third or a fourth vendor joins the party? Would a new environment be created for each vendor? Hopefully not…

    On the other side, there are also vendors that don't want to relinquish ownership, and they will play their cards so as not to do it. The overhead of dealing with such issues may, in extremis, surpass the costs of having a second environment. In the end, the final decision belongs to the customer.


Hybrid Approach


    A hybrid between sharing and splitting ownership can prove to give the best of the two scenarios. It's useful and even recommended to define the boundaries of work for each vendor, and then to share ownership in the areas where concerns intersect, the grey areas. For sensitive areas there could be some restrictions in cooperation.

    A hybrid solution can involve as well splitting some parts of the architecture, though the performance and security are mainly the driving factors.


Conclusion

   With this post I wanted to make the reader question some of the hot-headed decisions made when two or more vendors are involved in supporting an architecture. Even if the problem is put within the context of a DW, its occurrence extends far beyond this context. We are enablers and problem solvers. Instead of avoiding challenges we should better make sure that we're removing or minimizing the risks.

14 August 2011

Graphical Representation: Data Flow Diagram (Definitions)

"A diagram that shows the data flows in an organization, including sources of data, where data are stored, and processes that transform data." (Jan L Harrington, "Relational Database Dessign: Clearly Explained" 2nd Ed., 2002)

"A diagram of the data flow from sources through processes and files to users. A source or user is represented by a square; a data file is represented by rectangles with missing righthand edges; a process is represented by a circle or rounded rectangle; and a data flow is represented by an arrow." (Jens Mende, "Data Flow Diagram Use to Plan Empirical Research Projects", 2009)

"A diagram used in functional analysis which specifies the functions of the system, the inputs/outputs from/to external (user) entities, and the data being retrieved from or updating data stores. There are well-defined rules for specifying correct DFDs, as well as for creating hierarchies of interrelated DFDs." (Peretz Shoval & Judith Kabeli, "Functional and Object-Oriented Methodology for Analysis and Design", 2009)

[Control Data Flow Graph (CDFG):] " Represents the control flow and the data dependencies in a program." (Alexander Dreweke et al, "Text Mining in Program Code", 2009)

"A graphic method for documenting the flow of data within an organization." (Jan L Harrington, "Relational Database Design and Implementation: Clearly explained" 3rd Ed., 2009)

"A graphic representation of the interactions between different processes in an organization in terms of data flow communications among them. This may be a physical data flow diagram that describes processes and flows in terms of the mechanisms involved, a logical data flow diagram that is without any representation of the mechansm, or an essential data flow diagram that is a logical data flow diagram organized in terms of the processes that respond to each external event." (David C Hay, "Data Model Patterns: A Metadata Map", 2010)

"Data-flow diagrams (DFDs) are system models that show a functional perspective where each transformation represents a single function or process. DFDs are used to show how data flows through a sequence of processing steps." (Ian Sommerville, "Software Engineering" 9th Ed., 2011)

"A model of the system that shows the system’s processes, the data that flow between them (hence the name), and the data stores used by the processes. The data flow diagram shows the system as a network of processes, and is thought to be the most easily recognized of all the analysis models." (James Robertson et al, "Complete Systems Analysis: The Workbook, the Textbook, the Answers", 2013)

"A picture of the movement of data between external entities and the processes and data stores within a system." (Jeffrey A Hoffer et al, "Modern Systems Analysis and Design" 7th Ed., 2014)

"A schematic indicating the direction of the movement of data" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"A Data Flow Diagram (DFD) is a graphical representation of the 'flow' of data through an information system, modeling its process aspects. Often it is a preliminary step used to create an overview of the system that can later be elaborated." (Henrikvon Scheel et al, "Process Concept Evolution", 2015)

"Data flow maps are tools that graphically represent the results of a comprehensive data assessment to illustrate what information comes into an organization, for what purposes that information is used, and who has access to that information." (James R Kalyvas & Michael R Overly, "Big Data: A Business and Legal Guide", 2015)

"A graphical representation of the logical or conceptual movement of data within an existing or planned system." (George Tillmann, "Usage-Driven Database Design: From Logical Data Modeling through Physical Schmea Definition", 2017)

"a visual depiction using standard symbols and conventions of the sources of, movement of, operations on, and storage of data." (Meredith Zozus, "The Data Book: Collection and Management of Research Data", 2017)

"A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and the process itself." (Wikipedia) [source]

"A graphical representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction." (IQBBA)

03 February 2010

Data Warehousing: Snowflake Schema (Definitions)

"An extension of a star schema such that one or more dimensions are defined by multiple tables. In a snowflake schema, only primary dimension tables are joined to the fact table. Additional dimension tables are joined to primary dimension tables. " (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A variation on dimensional schema design in which dimension tables are further normalized, split into multiple tables based on hierarchies in the data." (Christopher Adamson, "Mastering Data Warehouse Aggregates", 2006)

"A single fact table which joins to many dimension tables, with each dimension normalized as two or more tables." (Reed Jacobsen & Stacia Misner, "Microsoft SQL Server 2005 Analysis Services Step by Step", 2006)

"A variation of the star schema in which the business dimensions are implemented as a set of normalized tables. The resulting diagram resembles a snowflake." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A normalized version of the STAR schema in which dimension tables are partially or fully normalized. Not generally recommended because it compromises query performance and simplicity for understanding." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A star schema in which dimension tables reference other tables." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

12 June 2009

DBMS: Data definition (Definitions)

"The process of setting up databases and creating database objects such as tables, indexes, rules, defaults, procedures, triggers, and views." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A data definition specifies the attributes, properties, and objects in a database." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"Specification of data entities, including their attributes and relationships, in a coherent database structure to create a schema." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"1.Statements that specify the business meaning associated with a conceptual, logical, or physical data entity or attribute. 2.The process of creating business meta-data, including names, meanings, integrity rules, and domain values. 3.In computer programming, the statements in a computer program that specify the physical attributes of the data to be processed, such as location and quantity of data." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The attributes, properties, and objects in a database." (Microsoft, "SQL Server 2012 Glossary", 2012)

"An executable statement where a variable is assigned a value." (Software Quality Assurance)

25 January 2009

DBMS: Schemas (Definitions)

"A description of a database generated by the data definition language (DDL) of the database management system (DBMS)." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"The structure of a database, which includes the definitions of the objects. The database schema includes the tables, columns, defaults, indexes, and relationships." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"Database schema refers to the names of tables, fields, data types, and primary and foreign keys of a database. Database schema is also known as the database structure." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A group of related database objects assigned to a database user. A schema contains tables, views, indexes, sequences, and SQL code. The schema name can be used to qualify objects that are not owned by the user referencing the objects." (Bob Bryla, "Oracle Database Foundations", 2004)

"In the SQL-92 standard, a collection of database objects that are owned by a single user and form a single namespace. A namespace is a set of objects that cannot have duplicate names" (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

"A description of data that provides context and structure. There are many types of schemas, including database schemas and XML schemas." (Tom Petrocelli, "Data Protection and Information Lifecycle Management", 2005)

"The design of the physical database that defines the tables in the database that correspond to physical data objects. The physical database schema is often referred to by CDI vendors as their data model." (Jill Dyché & Evan Levy, "Customer Data Integration: Reaching a Single Version of the Truth", 2006)

"A grouping mechanism for objects inside a database. A schema is owned by a user, and various users can be assigned permission to use objects in a schema. In SQL Server 2005 the schema is separated from the object owner and is used as the third part of the four-part naming system for objects, which is server.database.schema.object." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft® SQL Server™ 2005 Optimization and Maintenance 70-444", 2007)

"A group of database objects that make up a given namespace. Objects include tables, views, and statements that grant or revoke access to other securable objects. No two objects in any namespace can have the same name." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"In SQL Server 2000, the schema identified the model of the database. For example, the tables, VIEWs, data types, and so on would be identified as the schema. In SQL Server 2005, the term schema is used to identify a collection of database entities within a single namespace. Schemas are the owners of the objects, and one or more users can be the owners of schemas." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A collection of tables that forms a database." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"A conceptual data model that shows all the relationships among the data elements under consideration in a given context; the collection of table definitions in a relational database." (Toby J Teorey, "Database Modeling and Design" 4th Ed, 2010)

"In database design theory, the overall logical design of a database. In a SQL DBMS, a group of tables and supporting elements such as views and indexes." (Jan L Harrington, "SQL Clearly Explained 3rd Ed. ", 2010)

"(1) The logical or physical definition of data elements, physical characteristics, and relationships. (2) The diagram representing a logical data model." (Craig S Mullins, "Database Administration: The Complete Guide to DBA Practices and Procedures 2nd Ed", 2012)

"The names of tables, fields, data types, and primary and foreign keys of a database." (Microsoft, "SQL Server 2012 Glossary", 2012)

08 March 2007

Software Engineering: Reverse Engineering (Definitions)

"The process of analyzing existing software code and associated documentation to recover its architectural design and specification." (Ian Sommerville, "Software Engineering", 1996)

"The process of analyzing a subject system with two goals in mind: (1) to identify the system’s components and their interrelationships; and, (2) to create representations of the system in another form or at a higher level of abstraction." (Margaret Y Chu, "Blissful Data", 2004)

"Reverse engineering is the process of discovering the functions and their interrelationships of a software system as well as creating representations of the system in another form or at a higher level of abstraction." (Chia-Chu Chiang, "Software Modernization of Legacy Systems for Web Services Interoperability", 2008)

"the construction of a model from a set of source code files." (Bruce P Douglass, "Real-Time Agility: The Harmony/ESW Method for Real-Time and Embedded Systems Development", 2009)

"The process of transforming code into a model through a mapping from a specific implementation language." (Liliana M Favre et al, Foundations for MDA Case Tools", 2009)

[database reverse engineering:] "The process through which the logical and conceptual schemas of a legacy database, or of a set of files, are recovered, or rebuilt, from various information sources such as DDL code, data dictionary contents, database contents, or the source code of application programs that use the database." (Jean-Luc Hainaut et al, "Database Reverse Engineering", 2009)

"The process of transforming the physical schema of any particular database into a logical model." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"The process of deriving a draft physical model representing an implemented system (application and/or database) from automated scanning of the implemented application and database objects, as a first step towards redesign." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"The process of discovering the technological principles of a device, object or system through analysis of its structure, function and operation." (Tian Ge & Jianfeng Feng, "Granger Causality: Its Foundation and Applications in Systems Biology", 2011) 

"The process of taking a competitor's product apart and putting it back together again to better understand the manufacturing process and the product design." (Leslie G Eldenburg & Susan K Wolcott, "Cost Management" 2nd Ed., 2011)

"The process of analyzing and comprehending available software artifacts, such as requirements, design, architectures and code in order to extract information and provide high-level views of the system." (Liliana Favre et al, "Reverse Engineering of Object-Oriented Code: An ADM Approach" , 2015)

"It is a process in which a machine is completely dismantled in order to understand the intricacies of the machine. Finally, it is reassembled with added improvisation." Kuldeep K Saxena & Ankita Awasthi, "Novel Additive Manufacturing Processes and Techniques in Industry 4.0", 2020)

"The analysis of a subject system to identify its components and their interrelationships, and to create its representations in another form, or at higher level of abstraction." (Djelloul Bouchiha, "Reengineering Legacy Systems Towards New Technologies", 2021)

