
03 February 2021

📦Data Migrations (DM): Conceptualization (Part IV: Data Access)

Data Migration
Data Migrations Series

Once the data sources for a Data Migration (DM) have been identified, the first question is how the data can be accessed. Legacy systems relying on ODBC-based databases are in theory relatively easy to access, as long as they allow direct access to their data, which enables a pull strategy. Despite this, some organizations don't allow direct access to the data even for read-only operations, preferring either to push the data directly to the consumers (aka push strategy) or to push the data to a given location from where the consumers can use them as needed (aka hybrid strategy).

Direct access to the data allows in theory the best flexibility, as the solution can extract the data whenever needed. This is especially important during the initial phases of the project, when the data need to be pulled more frequently until the requirements and logic have stabilized. A push strategy tends to add overhead, as usually somebody else oversees the data exports and the data need to be prepared in the expected format. Therefore, even where direct access is normally restricted, it can make sense to grant an exception for a DM and allow direct access to the data.

Hybrid strategies tend to be more complex and require additional resources or overhead, as the data are stored temporarily at a separate location. Unfortunately, in certain scenarios this is the only approach that can be used. Data files that preserve the integrity of the data and facilitate their consumption are preferred; therefore, tabular text files or JSON files are preferred over XML or Excel files. It's also preferable to export each data structure individually rather than storing parent-child structures, even if the latter can prove useful in certain scenarios. When there's no other solution, one can also use the standard reports available in the legacy systems.
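
As a minimal sketch of such an export, assuming ODBC access to the legacy database and hypothetical table names, the following Python snippet dumps each data structure individually into its own delimited text file:

```python
import csv
import pyodbc  # assumes an ODBC driver/DSN for the legacy database is available

# Placeholder DSN and table names - adjust to the legacy system at hand
conn = pyodbc.connect("DSN=LegacySystem;UID=dm_reader;PWD=***")
cursor = conn.cursor()

# Export each data structure individually to its own tabular text file
for table in ("Customers", "Vendors", "OpenInvoices"):
    cursor.execute(f"SELECT * FROM dbo.{table}")
    with open(f"{table}.txt", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="|")
        writer.writerow([column[0] for column in cursor.description])  # header row
        writer.writerows(cursor.fetchall())

cursor.close()
conn.close()
```

A pipe delimiter tends to survive free-text fields better than a comma, though the delimiter and encoding should be agreed with the consumer upfront.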

When storing data outside the legacy systems for further processing, it's recommended to follow the organization's best practices and to address the data security and privacy requirements. ETL tools allow accessing data from password-protected areas like FTP, OneDrive or SharePoint. The fewer the security layers in between, the lower, in theory, the overhead. Therefore, given its stability and simplicity, FTP might prove to be a better storage solution than OneDrive, SharePoint or other similar technologies.
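
As an illustration of the hand-over via a password-protected FTP area, a minimal sketch using Python's standard library; the host, credentials and paths are placeholders:

```python
from ftplib import FTP_TLS  # FTPS keeps credentials and files encrypted in transit

# Placeholder host, credentials and paths - replace with the agreed exchange location
with FTP_TLS("ftp.example.org") as ftp:
    ftp.login(user="dm_reader", passwd="***")
    ftp.prot_p()                       # encrypt the data channel as well
    ftp.cwd("/exports/dm")
    with open("Customers.txt", "wb") as local_file:
        ftp.retrbinary("RETR Customers.txt", local_file.write)
```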

Ideally, the extraction/export mechanisms should use the database objects that already encapsulate the logic in the legacy systems; otherwise the team will need to reengineer the logic. For master data this can prove easy, though the logic behind transactional data like on-hand quantities or open invoices can be relatively complex to reengineer. In such cases the logic can be implemented directly in the extraction/export mechanisms, or sometimes it is more advisable to create database objects (usually on a different schema) on the legacy systems and just call the respective objects.
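
A minimal sketch of the latter approach, assuming a view created on a dedicated schema (here the hypothetical dm.vOpenInvoices) already encapsulates the open-invoices logic, so the extraction only needs to call it:

```python
import pyodbc

conn = pyodbc.connect("DSN=LegacySystem;UID=dm_reader;PWD=***")
cursor = conn.cursor()

# dm.vOpenInvoices is assumed to be a view created on a separate schema that
# encapsulates the open-invoices logic (joins, filters, currency conversion)
cursor.execute("SELECT * FROM dm.vOpenInvoices WHERE CompanyId = ?", "100")
open_invoices = cursor.fetchall()

cursor.close()
conn.close()
```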

When connecting directly to the data source, it's advisable to use the data provider that allows the best performance and flexibility, though several tests might be needed to determine the best fit. It is useful to check the limitations of each provider and find a stable driver version. OLEDB and ADO.Net data providers provide in general good performance, though the legacy systems' native drivers can be a better option, depending on the case.
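
Such tests can be as simple as timing the same representative extraction with each candidate driver; the connection strings below are placeholders and pyodbc is used only for illustration:

```python
import time
import pyodbc

# Hypothetical candidate connection strings, one per provider/driver to evaluate
candidates = {
    "Native legacy driver": "DRIVER={Legacy Native Driver};SERVER=legacy01;DATABASE=ERP;UID=dm_reader;PWD=***",
    "Generic ODBC driver": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=legacy01;DATABASE=ERP;UID=dm_reader;PWD=***",
}

# Time the same representative extraction with each provider to find the best fit
for name, connection_string in candidates.items():
    start = time.perf_counter()
    conn = pyodbc.connect(connection_string, timeout=30)
    rows = conn.execute("SELECT * FROM dbo.Customers").fetchall()
    conn.close()
    print(f"{name}: {len(rows)} rows in {time.perf_counter() - start:.1f}s")
```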

Some legacy systems allow access to their data only via service-based technologies like OData. OData tends to have poorer performance for large data exports than standard access methods and is therefore not indicated in such scenarios. In such cases it might be a better idea to export the data directly from the legacy system.
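
To illustrate why, a sketch of a paged OData extraction that follows the @odata.nextLink continuation links; the endpoint and entity set are hypothetical, and the one-round-trip-per-page pattern is what makes large exports slow:

```python
import requests

# Hypothetical OData endpoint and entity set exposed by the legacy system
url = "https://legacy.example.org/odata/Customers"
session = requests.Session()
session.auth = ("dm_reader", "***")

records = []
while url:
    response = session.get(url, timeout=60)
    response.raise_for_status()
    payload = response.json()
    records.extend(payload["value"])
    url = payload.get("@odata.nextLink")  # follow server-driven paging until exhausted

print(f"Extracted {len(records)} records, one round trip per page")
```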


13 January 2010

🗄️Data Management: Data Quality (Part II: An Introduction)

Data Management

Introduction

From time to time, I find myself in a meeting with people blaming the poor quality of data for the strange numbers appearing in a report, and often they are right. Sometimes you can see that without going too deep into the details; other times you have to spend a considerable amount of time and effort looking over the raw data to identify the records that cause the unexpected variations. We know we have a problem with data quality, so why is nobody doing anything to improve it? Who's responsible in the end for data quality? How do data quality issues appear, how can data quality be improved and by what means? When and where must data quality be considered? Why do we have to consider data quality and, after all, what is data quality? What are data quality's characteristics or dimensions?

Without pretending to be an expert on data quality, but relying on my own and others' experience and understanding in dealing with it, I will try to approach the above questions in several posts. And, as usual, before answering the more complex questions we inevitably have to define first what data quality is.

What is Data Quality?

In several consulted sources, mainly books and blogs, data quality's definition revolves around the 'fitness for use' syntagma (e.g. [1], [3], [4]). This definition makes data quality highly dependent on the context in which it's considered, because each request comes with different data quality expectations/requirements and data quality levels. Even more, the data could have acceptable quality in the legacy system(s), the system(s) used to store the data, though the data might not be appropriate for use in their current form. From this observation two aspects of data quality can be delimited – intrinsic data quality, concerning the creation and storage of data, and contextual data quality, referring to the degree to which the data meet users' needs.

Besides these two aspects, [2] considers two further ones: representation data quality, referring to the "degree of care taken in the presentation and organization of information for users", and accessibility data quality, the "degree of freedom that users have to use data, define or refine the manner in which information is input, processed or presented to them" [2]. These four aspects provide a good framework for categorizing data quality and, as can be seen, they target four coordinates of data – storage/creation, presentation, use and accessibility – however, they ignore the fact that data can also be consumed by machines, which need to parse, understand and process the data. Therefore, a fifth aspect could be added – processing data quality, referring to the data's readiness for consumption by machines, an aspect that includes the existence of metadata (e.g. data models, data mappings, etc.).


Why Data Quality?

Considering the growing dynamics and complexity of the economic landscape, nowadays an organization's capacity to stay in business is highly dependent on the accuracy of its decision-making process, which in its turn depends on the quality of the reports used to support decisions, and further on the quality of the data the reports are based on. If you have wrong data, you can't expect the reports to show the correct numbers, just as you can't expect a decision to be correct without having the actual situation correctly reflected, even if occasionally the person taking the decision has a sound understanding of the business above the data level, or a good gut feeling. It is essential for an organization to have adequate data and reports that reflect the current state of affairs at any time and at the requested level of detail.

In this context one talks about poor data quality, good data quality, high data quality, or even adequate/inadequate data quality. Paraphrasing data quality's definition, it could be said that data are of high quality if they are 'fit for use' [1] and of poor quality otherwise, though in real life things are not black or white; many nuances exist in between, dependent on the user's perception and needs. For example, a single value among one million records can be enough to make a data set impossible to process by a software package, and even if such data quality equates to a Six Sigma level, it is not enough for this case. At the opposite end, a data set with a huge number of defects could be processed without problems and the outcome could be close to what is expected; it depends also on the nature of the defects, which attributes are affected and how they affect the result.
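
To put the comparison in numbers – a Six Sigma process allows roughly 3.4 defects per million opportunities, yet even a single blocking value per million records can stop an automated load:

```python
# Defects per million opportunities (DPMO) for the example above
defects, records = 1, 1_000_000
dpmo = defects / records * 1_000_000
# 1.0 DPMO is well below the ~3.4 DPMO of a Six Sigma process,
# yet that single blocking value can still stop the automated processing
print(f"{dpmo:.1f} DPMO")
```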

Poor data quality considerably reduces the value of data, preventing organizations from exploiting the data to their full potential, a fact that limits their strategic competitive advantage. Poor data quality has a domino effect, often involving rework, delays in production, operations and decision making, a decrease in customer satisfaction reflected in revenues and lost opportunities, and so on. Unfortunately, the costs of poor data quality are difficult to quantify; they include the cost of finding and fixing the errors, the cost of wasted effort, lost production and lost opportunity, plus the costs involved with wrong decision making. The same issue affects the costs involved in achieving high data quality levels, the easiest to quantify being the costs directly involved with the data quality initiatives. It's not in scope to detail these aspects, though it is worth mentioning them to highlight why many organizations ignore the quality of their data.

How do Data Quality issues appear?

No system is bullet proof; no matter how well it was designed, there will always be a chance for errors to appear, whether in the software itself, the people, the processes, standards or procedures. A primary source of data issues is the weak validation implemented in the systems, either in the UI or in any other data entry points, together with the errors that escaped the vigilant eyes of testers and the (lack of) flexibility/robustness of applications, data models in general and data types in particular. On a secondary plane, but not less important, is the existence of and adherence to processes, standards or procedures – whether they are known and respected by users, and whether they cover all the scenarios occurring in the daily business activity.
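
As an illustration of the kind of validation that is often missing at the data entry points, a minimal sketch in Python with hypothetical field names and rules:

```python
import re
from datetime import date

def validate_customer(record: dict) -> list[str]:
    """Return the rule violations for a hypothetical customer record."""
    issues = []
    if not record.get("Name", "").strip():
        issues.append("Name is mandatory")
    if record.get("Country") not in {"DE", "AT", "CH"}:
        issues.append("Country outside the supported list")
    if record.get("VatNumber") and not re.fullmatch(r"[A-Z]{2}[0-9A-Z]{8,12}", record["VatNumber"]):
        issues.append("VAT number has an unexpected format")
    if record.get("CreatedOn") and record["CreatedOn"] > date.today():
        issues.append("Creation date lies in the future")
    return issues

# A record breaking three of the rules above
print(validate_customer({"Name": " ", "Country": "FR", "VatNumber": "DE12345"}))
```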

Who’s responsible for Data Quality?

As everybody in an organization is responsible for quality, the same applies to data quality: from data entry to decision making, everybody works with data directly or indirectly and is therefore, at least theoretically, responsible for it. From a data life-cycle perspective there are the people who enter the data into the various systems, the ones who prepare the data for further usage, the ones using the data, and the ones managing the data quality initiatives (e.g. quality leaders, black belts, champions). The first category needs to take ownership of the data, maintain them adequately, stick to the processes, standards and procedures defined, and, together with the other users, is responsible for identifying the issues in the data, mitigating them and finding solutions.

The data users are maybe the broadest category, ranging from simple employees using the data for daily activities to executives. Because the data they are working with are often aggregated at various levels, unless they have a good overview of the business or flexible reports that give them access to the raw data, it will be almost impossible for them to realize the actual data quality. Often the people preparing the data are the ones discovering the issues in the data, though this typically happens by chance, by observing abnormalities in the data.

The people managing the data quality initiatives hardly work with the data themselves, though together with the other executives they are responsible for modeling the organization and creating a (data) quality culture.

How can Data Quality be improved and by what means?

It takes time, resources, consolidated effort, adequate policies and management's involvement to reach an acceptable level of data quality. As already stressed, data quality can be improved by creating robust processes, standards or procedures and sticking to them, and by creating a culture for quality in general and for data quality in particular, though such changes are rather the outcome of data quality initiatives.

The first step is recognizing that data quality is a problem in the organization; the second is identifying the nature of the issues, eventually by conducting a data quality assessment. This involves defining the data quality (business) rules against which the quality of the data will be checked, and establishing an assessment, cleaning and reporting framework for this purpose. Because improvement is the word of the day, Six Sigma projects could be conducted to identify the issues' root causes and the solutions to address them, and to monitor the issues' resolution and whether the found solutions fulfill the intended goals. The use of the (Lean) Six Sigma methodology allows introducing quality concepts into the organization, making people aware of the premises and implications of data quality, and in addition it offers a recipe for success.
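
A minimal sketch of such an assessment step, checking a hypothetical extract against a handful of named rules and reporting how many records violate each of them (pandas is used here only for convenience; the rules and column names are made up):

```python
import pandas as pd

# Hypothetical extract of the customer master data
df = pd.read_csv("Customers.txt", sep="|")

# Named data quality (business) rules: each yields a boolean mask of offending records
rules = {
    "Missing name": df["Name"].isna() | (df["Name"].str.strip() == ""),
    "Duplicate customer number": df["CustomerNo"].duplicated(keep=False),
    "Invalid country code": ~df["Country"].isin(["DE", "AT", "CH"]),
    "Negative credit limit": df["CreditLimit"] < 0,
}

# Simple assessment report: number and share of records violating each rule
for rule, mask in rules.items():
    print(f"{rule}: {mask.sum()} records ({mask.mean():.1%})")
```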

Depending on the volume of data and the number of issues found, data quality improvement might occur over several iterations, and when the volume of issues is more than the organization can handle, it makes sense to prioritize and first fix the issues that impact the organization the most, addressing the remaining issues over time. There will also be situations in which it is necessary to redesign the existing processes, modify the legacy systems to limit the data entry issues, or even start new projects in order to fill the gaps.

When must Data Quality be considered?

Data quality comes into discussion especially when data-related issues visibly impact the business. Whether something is done to correct them depends on whether the organization has in place a data quality vision, a policy and adequate mechanisms for assessing and addressing data quality on a regular basis, or whether the people or departments impacted by the issues take the problem into their own hands and do something about it.

Another context in which the data quality topic appears is during migration, conversion and integration between systems, when the system(s) into which the data are supposed to be loaded, the destination system(s), come(s) with a set of rules/constraints requiring the data to be in a certain format/formatting, to match specific data types and to satisfy other data storage related constraints. Even if the data from the legacy system(s) are considered of high quality, in the mentioned activities data quality is judged against the constraints imposed by the destination system.

What needs to be stressed is that data quality is not a one-time endeavor but a continuous and consolidated effort that can span the life cycle of the systems storing the data. How often must data quality assessments be conducted? It depends on the data growth and perceived quality, on the organization's data quality policy/vision and the degree to which they are respected, on the organization's overall maturity and culture, on the availability of resources and on the different usages of the data.

Where is Data Quality considered?

Data quality is generally considered against the legacy systems, with the data being cleaned in them, though when data quality is judged against external requests, the data could be cleaned in Excel files or in any other environment designed for this kind of task, each scenario having its pluses and minuses, and sometimes a hybrid of the two alternatives being preferred. Most of the time it makes sense to clean the data in the legacy system, because in general this minimizes the overall effort, and the data remain "actual" and are reflected in the other systems using the data.

Excel or any other similar tool can prove to be a better environment for cleaning the data for one-time requests, when fast mass updates are needed or when it is not intended/possible to modify the legacy data. However, the probability of making mistakes is quite high, because the validations implemented in the legacy systems are no longer available or must be re-enforced, multiple copies of the same data could exist when the data need to be cleaned by multiple users, and the synchronization between legacy data and cleaned data could become problematic.

There are situations in which the data are not available in any of the systems in place and there is no adequate possibility to store the new data in them, so one or more systems eventually need to be extended (aka data enrichment), a solution that can be quite expensive. An alternative is to store the data in ad-hoc solutions (e.g. Excel, MS Access) and merge the existing data islands (aka data integration) before loading them into the destination system(s). When dealing with a relatively small number of records, the data could be entered manually into the destination system itself, though there are all kinds of constraints that need to be considered; therefore it is safer and more appropriate to load the data as contiguous sets.


Written: Dec-2009, Last Reviewed: Mar-2024

References:
[1] Joseph M. Juran & A. Blanton Godfrey (1999) Juran's Quality Handbook, 5th Ed.
[2] Coral Calero (2008) Handbook of Research on Web Information Systems Quality
[3] US Census Bureau (2006) Definition of Data Quality: Census Bureau Principle, version 1.3. [Online] Available from: http://www.census.gov/quality/P01-0_v1.3_Definition_of_Quality.pdf (Accessed: 24 December 2009)
[4] OCDQ (Obsessive-Compulsive Data Quality) Blog (2009) The Data-Information Continuum. [Online] Available from: http://www.ocdqblog.com/home/the-data-information-continuum.html (Accessed: 24 December 2009)
