27 October 2021

Data Science: Research (Just the Quotes)

"The aim of research is the discovery of the equations which subsist between the elements of phenomena." (Ernst Mach, 1898)

"[…] scientific research is somewhat like unraveling complicated tangles of strings, in which luck is almost as vital as skill and accurate observation." (Ernst Mach, "Knowledge and Error: Sketches on the Psychology of Enquiry", 1905)

"Research is fundamentally a state of mind involving continual re­examination of doctrines and axioms upon which current thought and action are based. It is, therefore, critical of existing practices." (Theobald Smith, "The Influence of Research in Bringing into Closer Relationship the Practice of Medicine and Public Health Activities", American Journal of Medical Sciences, 1929)

"In every important advance the physicist finds that the fundamental laws are simplified more and more as experimental research advances. He is astonished to notice how sublime order emerges from what appeared to be chaos. And this cannot be traced back to the workings of his own mind but is due to a quality that is inherent in the world of perception." (Albert Einstein, 1932)

"Statistics is a scientific discipline concerned with collection, analysis, and interpretation of data obtained from observation or experiment. The subject has a coherent structure based on the theory of Probability and includes many different procedures which contribute to research and development throughout the whole of Science and Technology." (Egon Pearson, 1936)

"A successful hypothesis is not necessarily a permanent hypothesis, but it is one which stimulates additional research, opens up new fields, or explains and coordinates previously unrelated facts." (Farrington Daniels, "Outlines of Physical Chemistry", 1948)

"The hypothesis is the principal intellectual instrument in research. Its function is to indicate new experiments and observations and it therefore sometimes leads to discoveries even when not correct itself. We must resist the temptation to become too attached to our hypothesis, and strive to judge it objectively and modify it or discard it as soon as contrary evidence is brought to light. Vigilance is needed to prevent our observations and interpretations being biased in favor of the hypothesis. Suppositions can be used without being believed." (William I B Beveridge, "The Art of Scientific Investigation", 1950)

"Mathematical models for empirical phenomena aid the development of a science when a sufficient body of quantitative information has been accumulated. This accumulation can be used to point the direction in which models should be constructed and to test the adequacy of such models in their interim states. Models, in turn, frequently are useful in organizing and interpreting experimental data and in suggesting new directions for experimental research." (Robert R. Bush & Frederick Mosteller, "A Mathematical Model for Simple Learning", Psychological Review 58, 1951)

"Statistics is the fundamental and most important part of inductive logic. It is both an art and a science, and it deals with the collection, the tabulation, the analysis and interpretation of quantitative and qualitative measurements. It is concerned with the classifying and determining of actual attributes as well as the making of estimates and the testing of various hypotheses by which probable, or expected, values are obtained. It is one of the means of carrying on scientific research in order to ascertain the laws of behavior of things - be they animate or inanimate. Statistics is the technique of the Scientific Method." (Bruce D Greenschields & Frank M Weida, "Statistics with Applications to Highway Traffic Analyses", 1952)

"In a general way it may be said that to think in terms of systems seems the most appropriate conceptual response so far available when the phenomena under study - at any level and in any domain--display the character of being organized, and when understanding the nature of the interdependencies constitutes the research task. In the behavioral sciences, the first steps in building a systems theory were taken in connection with the analysis of internal processes in organisms, or organizations, when the parts had to be related to the whole." (Fred Emery, "The Causal Texture of Organizational Environments", 1963)

"If the null hypothesis is not rejected, [Sir Ronald] Fisher's position was that nothing could be concluded. But researchers find it hard to go to all the trouble of conducting a study only to conclude that nothing can be concluded." (Frank L Schmidt, "Statistical Significance Testing and Cumulative Knowledge", "Psychology: Implications for Training of Researchers, Psychological Methods" Vol. 1 (2), 1996)

"Statisticians can calculate the probability that such random samples represent the population; this is usually expressed in terms of sampling error [...]. The real problem is that few samples are random. Even when researchers know the nature of the population, it can be time-consuming and expensive to draw a random sample; all too often, it is impossible to draw a true random sample because the population cannot be defined. This is particularly true for studies of social problems. [...] The best samples are those that come as close as possible to being random." (Joel Best, "Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists", 2001)

"Meta-analytic thinking is the consideration of any result in relation to previous results on the same or similar questions, and awareness that combination with future results is likely to be valuable. Meta-analytic thinking is the application of estimation thinking to more than a single study. It prompts us to seek meta-analysis of previous related studies at the planning stage of research, then to report our results in a way that makes it easy to include them in future meta-analyses. Meta-analytic thinking is a type of estimation thinking, because it, too, focuses on estimates and uncertainty." (Geoff Cumming, "Understanding the New Statistics", 2012)

"Statistical cognition is concerned with obtaining cognitive evidence about various statistical techniques and ways to present data. It’s certainly important to choose an appropriate statistical model, use the correct formulas, and carry out accurate calculations. It’s also important, however, to focus on understanding, and to consider statistics as communication between researchers and readers." (Geoff Cumming, "Understanding the New Statistics", 2012)

"Another way to secure statistical significance is to use the data to discover a theory. Statistical tests assume that the researcher starts with a theory, collects data to test the theory, and reports the results - whether statistically significant or not. Many people work in the other direction, scrutinizing the data until they find a pattern and then making up a theory that fits the pattern." (Gary Smith, "Standard Deviations", 2014)

"How can we tell the difference between a good theory and quackery? There are two effective antidotes: common sense and fresh data. If it is a ridiculous theory, we shouldn’t be persuaded by anything less than overwhelming evidence, and even then be skeptical. Extraordinary claims require extraordinary evidence. Unfortunately, common sense is an uncommon commodity these days, and many silly theories have been seriously promoted by honest researchers." (Gary Smith, "Standard Deviations", 2014)

"These practices - selective reporting and data pillaging - are known as data grubbing. The discovery of statistical significance by data grubbing shows little other than the researcher’s endurance. We cannot tell whether a data grubbing marathon demonstrates the validity of a useful theory or the perseverance of a determined researcher until independent tests confirm or refute the finding. But more often than not, the tests stop there. After all, you won’t become a star by confirming other people’s research, so why not spend your time discovering new theories? The data-grubbed theory consequently sits out there, untested and unchallenged." (Gary Smith, "Standard Deviations", 2014)

"A conceptual model is a framework that is initially used in research to outline the possible courses of action or to present an idea or thought. When a conceptual model is developed in a logical manner, it will provide a rigor to the research process." (N Elangovan & R Rajendran, "Conceptual Model: A Framework for Institutionalizing the Vigor in Business Research", 2015)

"Even properly done statistics can’t be trusted. The plethora of available statistical techniques and analyses grants researchers an enormous amount of freedom when analyzing their data, and it is trivially easy to ‘torture the data until it confesses’." (Alex Reinhart, "Statistics Done Wrong: The Woefully Complete Guide", 2015)

"The correlational technique known as multiple regression is used frequently in medical and social science research. This technique essentially correlates many independent (or predictor) variables simultaneously with a given dependent variable (outcome or output). It asks, 'Net of the effects of all the other variables, what is the effect of variable A on the dependent variable?' Despite its popularity, the technique is inherently weak and often yields misleading results. The problem is due to self-selection. If we don’t assign cases to a particular treatment, the cases may differ in any number of ways that could be causing them to differ along some dimension related to the dependent variable. We can know that the answer given by a multiple regression analysis is wrong because randomized control experiments, frequently referred to as the gold standard of research techniques, may give answers that are quite different from those obtained by multiple regression analysis." (Richard E Nisbett, "Mindware: Tools for Smart Thinking", 2015)

"Collecting data through sampling therefore becomes a never-ending battle to avoid sources of bias. [...] While trying to obtain a random sample, researchers sometimes make errors in judgment about whether every person or thing is equally likely to be sampled." (Daniel J Levitin, "Weaponized Lies", 2017)

"Samples give us estimates of something, and they will almost always deviate from the true number by some amount, large or small, and that is the margin of error. […] The margin of error does not address underlying flaws in the research, only the degree of error in the sampling procedure. But ignoring those deeper possible flaws for the moment, there is another measurement or statistic that accompanies any rigorously defined sample: the confidence interval." (Daniel J Levitin, "Weaponized Lies", 2017)

"The job of the statistician is to formulate an inventory of all those things that matter in order to obtain a representative sample. Researchers have to avoid the tendency to capture variables that are easy to identify or collect data on - sometimes the things that matter are not obvious or are difficult to measure." (Daniel J Levitin, "Weaponized Lies", 2017)

04 April 2021

Lean Management: Between Value and Waste I (Introduction)


Regardless of whether Lean Management is considered in the context of Manufacturing, Software Development (SD), Project Management (PM) or any other business-related area, the whole scaffolding of the Lean philosophies is built upon three fundamental business concepts: value, value stream and waste.

From an economic standpoint, value refers to the monetary worth of a product, asset or service (further referred to as 'product') to an organization, while from a qualitative perspective it refers to the perceived benefit associated with its usage. Value is thus reflected in the costs associated with a product's delivery (the producer's perspective), respectively in the price paid for acquiring it and the degree to which the product can fulfill a demand (the customer's perspective).

Without diving too deep into the theory of product valuation, the challenges revolve around reducing the costs associated with a product's delivery and selling it at a price the customer is willing to pay, typically to address a given set of needs. Moreover, the customer is willing to pay only for the functions that satisfy the needs the product is meant to cover. From this friction of opposing driving forces, a product is designed and valued.

The value stream is the sequence of activities (also steps or processes) needed to deliver a product to customers. This formulation includes value-added and non-value-added activities as well as internal and external customers, and it covers the full lifecycle of products and/or services in whatever form it occurs, whether or not it is perceived by the customers.

Waste is any activity that consumes resources but creates no value for the customers or, more generally, for the stakeholders, be they internal or external. Waste is typically associated with non-value-added activities, and it can increase the costs of products directly or indirectly, especially when no attention is given to it and/or it is not recognized as such. Eliminating waste can therefore have an important impact on products' costs and is one of the goals of Lean Management. Moreover, eliminating waste is an incremental process that, when put in the context of continuous improvement, can lead to process redesign and re-engineering.

Taiichi Ohno, the ‘father’ of the Toyota Production System (TPS), originally identified seven forms of waste (Japanese: muda): overproduction, waiting, transporting, inappropriate processing, unnecessary inventory, unnecessary/excess motion, and defects. Within the context of SD and PM, Mary and Tom Poppendieck [1] translated these types of waste into concepts closer to the language of software developers: partially done work, extra processes, extra features, task switching, waiting, motion and, of course, defects. Further types of waste associated with resources, confusion and work conditions were added later to this list.

Defects in the form of errors and bugs, ineffective communication, rework and overwork, waiting, and repetitive activities like handoffs or even unnecessary meetings are usually the visible part of products and projects, and they are important from the stakeholders' perspective; in extreme cases they can become a sensitive topic when their volume increases out of proportion.

Unfortunately, lurking in the deep waters of projects and wrecking everything that stands in their way are the other forms of waste, less perceivable from the stakeholders' side: unclear requirements/goals, code not released or not tested, specifications not implemented, scrapped code, overutilized/underutilized resources, bureaucracy, suboptimal processes, unnecessary optimization, searching for information, mismanagement, task switching, improper work conditions and confusion, to mention just the most important activities associated with waste.

Because of their elusive nature, and independently of whether they are visible to stakeholders or not, they all impact the costs of projects and products when they are not given the proper attention and handled accordingly.

Lean Management - The Waste Iceberg

References:
[1] Mary Poppendieck & Tom Poppendieck (2003) Lean Software Development: An Agile Toolkit, Addison Wesley, ISBN: 0-321-15078-3

01 April 2021

SQL Reloaded: Processing JSON Files with Complex Structure in SQL Server 2016+

Unfortunately (or fortunately, for the challenge-searchers), not all JSON data files have a simple (matrix) structure, and the data might not even come with a proper (readable) definition. It's the case of the unemployment data provided by the Cologne municipality (source). However, with a page translator and some small effort one can identify the approximate data definition:

Source Field | Target Field | Data Type
AM_ALO_INSG_AA | ALO_Total | int
AM_ALO_SGB2_AA | ALO_SGB2 | int
AM_ALO_UNTER25_AA | ALO_Under25 | int
AM_ALO_INSG_AP | ALO_Total_Perc | float
AM_ALO_SGB2_AP | ALO_SGB2_Perc | float
AM_ALO_UNTER25_AP | ALO_Under25_Perc | float
AM_ALO_INSG_HA | ALO_Total_Hist | int
AM_ALO_SGB2_HA | ALO_SGB2_Hist | int
AM_ALO_UNTER25_HA | ALO_Under25_Hist | int
AM_ALO_INSG_HP | ALO_Total_HistPerc | float
AM_ALO_SGB2_HP | ALO_SGB2_HistPerc | float
AM_ALO_UNTER25_HP | ALO_Under25_HistPerc | float
AM_SVB_INSG_AA | SVB_Total | int
AM_SVB_MANN_AA | SVB_Men | int
AM_SVB_FRAU_AA | SVB_Women | int
AM_SVB_DEUTSCH_AA | SVB_German | int
AM_SVB_AUSLAND_AA | SVB_AUSLAND | int
AM_SVB_U25J_AA | SVB_Under25Yo | int
AM_SVB_UEBER55J_AA | SVB_Over55Yo | int
AM_SVB_INSG_AP | SVB_Total_Perc | float
AM_SVB_MANN_AP | SVB_Men_Perc | float
AM_SVB_FRAU_AP | SVB_Women_Perc | float
AM_SVB_DEUTSCH_AP | SVB_German_Perc | float
AM_SVB_AUSLAND_AP | SVB_AUSLAND_Perc | float
AM_SVB_U25J_AP | SVB_Under25Yo_Perc | float
AM_SVB_UEBER55J_AP | SVB_Over55Yo_Perc | float
AM_SVB_INSG_HA | SVB_Total_Hist | int
AM_SVB_MANN_HA | SVB_Men_Hist | int
AM_SVB_FRAU_HA | SVB_Women_Hist | int
AM_SVB_DEUTSCH_HA | SVB_German_Hist | int
AM_SVB_AUSLAND_HA | SVB_AUSLAND_Hist | int
AM_SVB_U25J_HA | SVB_Under25Yo_Hist | int
AM_SVB_UEBER55J_HA | SVB_Over55Yo_Hist | int
AM_SVB_INSG_HP | SVB_Total_HistPerc | float
AM_SVB_MANN_HP | SVB_Men_HistPerc | float
AM_SVB_FRAU_HP | SVB_Women_HistPerc | float
AM_SVB_DEUTSCH_HP | SVB_German_HistPerc | float
AM_SVB_AUSLAND_HP | SVB_AUSLAND_HistPerc | float
AM_SVB_U25J_HP | SVB_Under25Yo_HistPerc | float
AM_SVB_UEBER55J_HP | SVB_Over55Yo_HistPerc | float
SHAPE.AREA | SHAPE.AREA | int
SHAPE.LEN | SHAPE.LEN | int

As previously stated (see post), it makes sense to build the logic over several iterations, first making sure that the references to the file's columns are used correctly (observe the way the various elements are referenced in the queries):

SELECT DAT.ObjectId 
, DAT.Nummer
, DAT.Name
, DAT.ALO_Total
, DAT.ALO_SGB2
FROM OPENROWSET (BULK 'D:\data\Arbeitsmarkt Statistik Koeln Stadtteil.json',CODEPAGE='65001', SINGLE_CLOB)  as jsonfile 
     CROSS APPLY OPENJSON(BulkColumn,'$.features')
 WITH( 
	  ObjectId int '$.properties.OBJECTID'
	, Nummer int '$.properties.NUMMER'
	, Name nvarchar(max) '$.properties.NAME'
	, ALO_Total int '$.properties.AM_ALO_INSG_AA'
	, ALO_SGB2 int '$.properties.AM_ALO_SGB2_AA'
) AS DAT; 

Output (first 13 records):
ObjectId | Nummer | Name | ALO_Total | ALO_SGB2
1 | 211 | Godorf | 121 | 92
2 | 308 | Lövenich | 156 | 83
3 | 307 | Weiden | 552 | 383
4 | 306 | Junkersdorf | 305 | 164
5 | 309 | Widdersdorf | 200 | 93
6 | 404 | Vogelsang | 310 | 208
7 | 505 | Weidenpesch | 527 | 351
8 | 502 | Mauenheim | 190 | 127
12 | 207 | Hahnwald | 15 | 3
13 | 213 | Meschenich | 644 | 554

The logic seems to work, however the German umlauts aren't displayed as expected (a mis-encoded value appears instead of 'Lövenich'). This is caused by the differences in character sets. An easy way to address this is to use a function which does the conversion (see the dbo.ReplaceCodes2Umlauts UDF from an older post).
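
For reference, a minimal sketch of such a conversion function is shown below; it simply replaces the mis-encoded character sequences with the intended umlauts. It is only a sketch under the assumption that the text was read with a mismatched code page - the actual dbo.ReplaceCodes2Umlauts UDF from the older post may cover a different set of codes.

-- minimal sketch of a codes-to-umlauts conversion function (the original UDF may differ)
CREATE FUNCTION dbo.ReplaceCodes2Umlauts(
  @string nvarchar(max))
RETURNS nvarchar(max)
AS
BEGIN
	-- replace the mis-encoded sequences with the intended characters
	SET @string = REPLACE(@string, N'Ã¤', N'ä');
	SET @string = REPLACE(@string, N'Ã¶', N'ö');
	SET @string = REPLACE(@string, N'Ã¼', N'ü');
	SET @string = REPLACE(@string, N'Ã„', N'Ä');
	SET @string = REPLACE(@string, N'Ã–', N'Ö');
	SET @string = REPLACE(@string, N'Ãœ', N'Ü');
	SET @string = REPLACE(@string, N'ÃŸ', N'ß');
	RETURN @string;
END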

By applying the function to the Name column, adding the further columns and an INTO clause, the query becomes:
 
-- importing the JSON file
SELECT DAT.ObjectId 
, DAT.Nummer
, dbo.ReplaceCodes2Umlauts(DAT.Name) Name
, DAT.ALO_Total
, DAT.ALO_SGB2
, DAT.ALO_Under25
, DAT.ALO_Total_Perc
, DAT.ALO_SGB2_Perc
, DAT.ALO_Under25_Perc
, DAT.ALO_Total_Hist
, DAT.ALO_SGB2_Hist
, DAT.ALO_Under25_Hist
, DAT.ALO_Total_HistPerc
, DAT.ALO_SGB2_HistPerc
, DAT.ALO_Under25_HistPerc
, DAT.SVB_Total
, DAT.SVB_Men
, DAT.SVB_Women
, DAT.SVB_German
, DAT.SVB_AUSLAND
, DAT.SVB_Under25Yo
, DAT.SVB_Over55Yo
, DAT.SVB_Total_Perc
, DAT.SVB_Men_Perc
, DAT.SVB_Women_Perc
, DAT.SVB_German_Perc
, DAT.SVB_AUSLAND_Perc
, DAT.SVB_Under25Yo_Perc
, DAT.SVB_Over55Yo_Perc
, DAT.SVB_Total_Hist
, DAT.SVB_Men_Hist
, DAT.SVB_Women_Hist
, DAT.SVB_German_Hist
, DAT.SVB_AUSLAND_Hist
, DAT.SVB_Under25Yo_Hist
, DAT.SVB_Over55Yo_Hist
, DAT.SVB_Total_HistPerc
, DAT.SVB_Men_HistPerc
, DAT.SVB_Women_HistPerc
, DAT.SVB_German_HistPerc
, DAT.SVB_AUSLAND_HistPerc
, DAT.SVB_Under25Yo_HistPerc
, DAT.SVB_Over55Yo_HistPerc
, DAT.Shape_Area 
, DAT.Shape_Len 
INTO dbo.Unemployment_Cologne
FROM OPENROWSET (BULK 'D:\data\Arbeitsmarkt Statistik Koeln Stadtteil.json',CODEPAGE='65001', SINGLE_CLOB)  as jsonfile 
     CROSS APPLY OPENJSON(BulkColumn,'$.features')
 WITH( 
	  ObjectId int '$.properties.OBJECTID'
	, Nummer int '$.properties.NUMMER'
	, Name nvarchar(max) '$.properties.NAME'
	, ALO_Total int '$.properties.AM_ALO_INSG_AA'
	, ALO_SGB2 int '$.properties.AM_ALO_SGB2_AA'
	, ALO_Under25 int '$.properties.AM_ALO_UNTER25_AA'
	, ALO_Total_Perc float '$.properties.AM_ALO_INSG_AP'
	, ALO_SGB2_Perc float '$.properties.AM_ALO_SGB2_AP'
	, ALO_Under25_Perc float '$.properties.AM_ALO_UNTER25_AP'
	, ALO_Total_Hist int '$.properties.AM_ALO_INSG_HA'
	, ALO_SGB2_Hist int '$.properties.AM_ALO_SGB2_HA'
	, ALO_Under25_Hist int '$.properties.AM_ALO_UNTER25_HA'
	, ALO_Total_HistPerc float '$.properties.AM_ALO_INSG_HP'
	, ALO_SGB2_HistPerc float '$.properties.AM_ALO_SGB2_HP'
	, ALO_Under25_HistPerc float '$.properties.AM_ALO_UNTER25_HP'
	, SVB_Total int '$.properties.AM_SVB_INSG_AA'
	, SVB_Men int '$.properties.AM_SVB_MANN_AA'
	, SVB_Women int '$.properties.AM_SVB_FRAU_AA'
	, SVB_German int '$.properties.AM_SVB_DEUTSCH_AA'
	, SVB_AUSLAND int '$.properties.AM_SVB_AUSLAND_AA'
	, SVB_Under25Yo int '$.properties.AM_SVB_U25J_AA'
	, SVB_Over55Yo int '$.properties.AM_SVB_UEBER55J_AA'
	, SVB_Total_Perc float '$.properties.AM_SVB_INSG_AP'
	, SVB_Men_Perc float '$.properties.AM_SVB_MANN_AP'
	, SVB_Women_Perc float '$.properties.AM_SVB_FRAU_AP'
	, SVB_German_Perc float '$.properties.AM_SVB_DEUTSCH_AP'
	, SVB_AUSLAND_Perc float '$.properties.AM_SVB_AUSLAND_AP'
	, SVB_Under25Yo_Perc float '$.properties.AM_SVB_U25J_AP'
	, SVB_Over55Yo_Perc float '$.properties.AM_SVB_UEBER55J_AP'
	, SVB_Total_Hist int '$.properties.AM_SVB_INSG_HA'
	, SVB_Men_Hist int '$.properties.AM_SVB_MANN_HA'
	, SVB_Women_Hist int '$.properties.AM_SVB_FRAU_HA'
	, SVB_German_Hist int '$.properties.AM_SVB_DEUTSCH_HA'
	, SVB_AUSLAND_Hist int '$.properties.AM_SVB_AUSLAND_HA'
	, SVB_Under25Yo_Hist int '$.properties.AM_SVB_U25J_HA'
	, SVB_Over55Yo_Hist int '$.properties.AM_SVB_UEBER55J_HA'
	, SVB_Total_HistPerc float '$.properties.AM_SVB_INSG_HP'
	, SVB_Men_HistPerc float '$.properties.AM_SVB_MANN_HP'
	, SVB_Women_HistPerc float '$.properties.AM_SVB_FRAU_HP'
	, SVB_German_HistPerc float '$.properties.AM_SVB_DEUTSCH_HP'
	, SVB_AUSLAND_HistPerc float '$.properties.AM_SVB_AUSLAND_HP'
	, SVB_Under25Yo_HistPerc float '$.properties.AM_SVB_U25J_HP'
	, SVB_Over55Yo_HistPerc float '$.properties.AM_SVB_UEBER55J_HP'
	, Shape_Area float '$.properties."SHAPE.AREA"'
	, Shape_Len float '$.properties."SHAPE.LEN"'
) AS DAT; 

Once the data are made available, one can go on and discover the data and the relationships existing between the various columns.
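
For example, a first exploratory query over the imported table could look as follows (just a sketch based on the columns defined above; extend it as the analysis requires):

-- a first look at the imported data
SELECT DAT.Name
, DAT.ALO_Total
, DAT.ALO_Total_Perc
, DAT.SVB_Total
, DAT.SVB_Total_Perc
FROM dbo.Unemployment_Cologne DAT
ORDER BY DAT.ALO_Total_Perc DESC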

Happy coding!

SQL Reloaded: Processing JSON Files with Flat Matrix Structure in SQL Server 2016+

Besides the CSV format, many of the data files made available under open data initiatives are stored in JSON format, which makes the data more difficult to process, even if JSON offers a richer structure that goes beyond the tabular structure of CSV files. Fortunately, starting with SQL Server 2016, JSON became a natively supported format, which makes processing JSON files relatively easy, though how easy it is depends on how the data are structured.

Let’s consider as example a JSON file with the world population per country and year that can be downloaded from DataHub (source). The structure behind it resembles a tabular model (see the table on the source website), having a flat structure. Just export the data to a file with the JSON extension (e.g. ‘population-figures-by-country.json’) locally (e.g. ‘D:/Data’). The next step is to understand the file's structure. Some repositories provide good documentation in this respect, though there are also many exceptions. Having a JSON editor like Visual Studio, which reveals the structure, makes the process easier.
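
When no such editor is at hand, one can also check directly in T-SQL whether the file's content is valid JSON before attempting to parse it (a small sketch, assuming the file was saved under the path used in the queries below):

-- checking whether the file contains valid JSON
SELECT ISJSON(BulkColumn) IsValidJson
, LEN(BulkColumn) NumberOfCharacters
FROM OPENROWSET (BULK 'D:\data\population-figures-by-country.json', SINGLE_CLOB) as jsonfile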

As in the case of CSV files, one needs to infer the data types. There are two alphanumeric fields (Country & Country Code), while the remaining fields are numeric. The only challenge raised by the data seems to be the difference in format between the years 2002 to 2015 and the other years, as the values of the former contain a decimal part. All the numeric values should have been whole numbers.

It’s recommended to start small and build the logic iteratively. Therefore, for the first step just look at the file's content via the OPENROWSET function:

-- looking at the JSON file 
SELECT *
FROM OPENROWSET (BULK 'D:\data\population-figures-by-country.json', SINGLE_CLOB)  as jsonfile 

In a second step one can add the OPENJSON function by looking only at the first record: 

-- querying a json file (one record)
SELECT *
FROM OPENROWSET (BULK 'D:\data\population-figures-by-country.json', SINGLE_CLOB)  as jsonfile 
     CROSS APPLY OPENJSON(BulkColumn,'$[0]')

In a third step one can add a few columns (e.g. Country & Country Code) to make sure that the select statement works correctly. 

-- querying a json file (all records, a few fields)
SELECT Country 
, CountryCode 
FROM OPENROWSET (BULK 'D:\data\population-figures-by-country.json', SINGLE_CLOB)  as jsonfile 
     CROSS APPLY OPENJSON(BulkColumn,'$')
 WITH ( 
  Country nvarchar(max) '$.Country'
, CountryCode nvarchar(3) '$.Country_Code'
) AS DAT; 

In the next step one can add all the columns and import the data into a table (e.g. dbo.CountryPopulation) on the fly:

-- importing a json file (all records) on the fly
SELECT DAT.Country
, DAT.CountryCode
, DAT.Y1960
, DAT.Y1961
, DAT.Y1962
, DAT.Y1963
, DAT.Y1964
, DAT.Y1965
, DAT.Y1966
, DAT.Y1967
, DAT.Y1968
, DAT.Y1969
, DAT.Y1970
, DAT.Y1971
, DAT.Y1972
, DAT.Y1973
, DAT.Y1974
, DAT.Y1975
, DAT.Y1976
, DAT.Y1977
, DAT.Y1978
, DAT.Y1979
, DAT.Y1980
, DAT.Y1981
, DAT.Y1982
, DAT.Y1983
, DAT.Y1984
, DAT.Y1985
, DAT.Y1986
, DAT.Y1987
, DAT.Y1988
, DAT.Y1989
, DAT.Y1990
, DAT.Y1991
, DAT.Y1992
, DAT.Y1993
, DAT.Y1994
, DAT.Y1995
, DAT.Y1996
, DAT.Y1997
, DAT.Y1998
, DAT.Y1999
, DAT.Y2000
, DAT.Y2001
, Cast(DAT.Y2002 as bigint) Y2002
, Cast(DAT.Y2003 as bigint) Y2003
, Cast(DAT.Y2004 as bigint) Y2004
, Cast(DAT.Y2005 as bigint) Y2005
, Cast(DAT.Y2006 as bigint) Y2006
, Cast(DAT.Y2007 as bigint) Y2007
, Cast(DAT.Y2008 as bigint) Y2008
, Cast(DAT.Y2009 as bigint) Y2009
, Cast(DAT.Y2010 as bigint) Y2010
, Cast(DAT.Y2011 as bigint) Y2011
, Cast(DAT.Y2012 as bigint) Y2012
, Cast(DAT.Y2013 as bigint) Y2013
, Cast(DAT.Y2014 as bigint) Y2014
, Cast(DAT.Y2015 as bigint) Y2015
, DAT.Y2016
INTO dbo.CountryPopulation
FROM OPENROWSET (BULK 'D:\data\population-figures-by-country.json', SINGLE_CLOB)  as jsonfile 
     CROSS APPLY OPENJSON(BulkColumn,'$')
 WITH ( 
  Country nvarchar(max) '$.Country'
, CountryCode nvarchar(3) '$.Country_Code'
, Y1960 bigint '$.Year_1960'
, Y1961 bigint '$.Year_1961'
, Y1962 bigint '$.Year_1962'
, Y1963 bigint '$.Year_1963'
, Y1964 bigint '$.Year_1964'
, Y1965 bigint '$.Year_1965'
, Y1966 bigint '$.Year_1966'
, Y1967 bigint '$.Year_1967'
, Y1968 bigint '$.Year_1968'
, Y1969 bigint '$.Year_1969'
, Y1970 bigint '$.Year_1970'
, Y1971 bigint '$.Year_1971'
, Y1972 bigint '$.Year_1972'
, Y1973 bigint '$.Year_1973'
, Y1974 bigint '$.Year_1974'
, Y1975 bigint '$.Year_1975'
, Y1976 bigint '$.Year_1976'
, Y1977 bigint '$.Year_1977'
, Y1978 bigint '$.Year_1978'
, Y1979 bigint '$.Year_1979'
, Y1980 bigint '$.Year_1980'
, Y1981 bigint '$.Year_1981'
, Y1982 bigint '$.Year_1982'
, Y1983 bigint '$.Year_1983'
, Y1984 bigint '$.Year_1984'
, Y1985 bigint '$.Year_1985'
, Y1986 bigint '$.Year_1986'
, Y1987 bigint '$.Year_1987'
, Y1988 bigint '$.Year_1988'
, Y1989 bigint '$.Year_1989'
, Y1990 bigint '$.Year_1990'
, Y1991 bigint '$.Year_1991'
, Y1992 bigint '$.Year_1992'
, Y1993 bigint '$.Year_1993'
, Y1994 bigint '$.Year_1994'
, Y1995 bigint '$.Year_1995'
, Y1996 bigint '$.Year_1996'
, Y1997 bigint '$.Year_1997'
, Y1998 bigint '$.Year_1998'
, Y1999 bigint '$.Year_1999'
, Y2000 bigint '$.Year_2000'
, Y2001 bigint '$.Year_2001'
, Y2002 decimal(19,1) '$.Year_2002'
, Y2003 decimal(19,1) '$.Year_2003'
, Y2004 decimal(19,1) '$.Year_2004'
, Y2005 decimal(19,1) '$.Year_2005'
, Y2006 decimal(19,1) '$.Year_2006'
, Y2007 decimal(19,1) '$.Year_2007'
, Y2008 decimal(19,1) '$.Year_2008'
, Y2009 decimal(19,1) '$.Year_2009'
, Y2010 decimal(19,1) '$.Year_2010'
, Y2011 decimal(19,1) '$.Year_2011'
, Y2012 decimal(19,1) '$.Year_2012'
, Y2013 decimal(19,1) '$.Year_2013'
, Y2014 decimal(19,1) '$.Year_2014'
, Y2015 decimal(19,1) '$.Year_2015'
, Y2016 bigint '$.Year_2016'
) AS DAT; 

As can be seen, the decimal values were converted to bigint to preserve the same definition across all the year columns. Moreover, this makes later data processing easier, as no additional (implicit) conversions are necessary.

Also, the columns' names were changed, either for simplification/convenience or simply as a matter of taste.

Writing such a monster query can be time-consuming, though preparing the metadata in Excel can considerably decrease the effort. With copy-paste and a few tricks (e.g. replacing values, splitting columns based on a delimiter) one can easily prepare such a structure:

Source field | Target field | DataType | Value | Import Clause | Select Clause
Country | Country | nvarchar(max) | Yemen Rep. | , Country nvarchar(max) '$.Country' | , DAT.Country
Country_Code | CountryCode | nvarchar(3) | YEM | , CountryCode nvarchar(3) '$.Country_Code' | , DAT.CountryCode
Year_1960 | Y1960 | bigint | 5172135 | , Y1960 bigint '$.Year_1960' | , DAT.Y1960
Year_1961 | Y1961 | bigint | 5260501 | , Y1961 bigint '$.Year_1961' | , DAT.Y1961
Year_1962 | Y1962 | bigint | 5351799 | , Y1962 bigint '$.Year_1962' | , DAT.Y1962
Year_1963 | Y1963 | bigint | 5446063 | , Y1963 bigint '$.Year_1963' | , DAT.Y1963
Year_1964 | Y1964 | bigint | 5543339 | , Y1964 bigint '$.Year_1964' | , DAT.Y1964
Year_1965 | Y1965 | bigint | 5643643 | , Y1965 bigint '$.Year_1965' | , DAT.Y1965
Year_1966 | Y1966 | bigint | 5748588 | , Y1966 bigint '$.Year_1966' | , DAT.Y1966
Year_1967 | Y1967 | bigint | 5858638 | , Y1967 bigint '$.Year_1967' | , DAT.Y1967
Year_1968 | Y1968 | bigint | 5971407 | , Y1968 bigint '$.Year_1968' | , DAT.Y1968
Year_1969 | Y1969 | bigint | 6083619 | , Y1969 bigint '$.Year_1969' | , DAT.Y1969
Year_1970 | Y1970 | bigint | 6193810 | , Y1970 bigint '$.Year_1970' | , DAT.Y1970
Year_1971 | Y1971 | bigint | 6300554 | , Y1971 bigint '$.Year_1971' | , DAT.Y1971
Year_1972 | Y1972 | bigint | 6407295 | , Y1972 bigint '$.Year_1972' | , DAT.Y1972
Year_1973 | Y1973 | bigint | 6523452 | , Y1973 bigint '$.Year_1973' | , DAT.Y1973
Year_1974 | Y1974 | bigint | 6661566 | , Y1974 bigint '$.Year_1974' | , DAT.Y1974
Year_1975 | Y1975 | bigint | 6830692 | , Y1975 bigint '$.Year_1975' | , DAT.Y1975
Year_1976 | Y1976 | bigint | 7034868 | , Y1976 bigint '$.Year_1976' | , DAT.Y1976
Year_1977 | Y1977 | bigint | 7271872 | , Y1977 bigint '$.Year_1977' | , DAT.Y1977
Year_1978 | Y1978 | bigint | 7536764 | , Y1978 bigint '$.Year_1978' | , DAT.Y1978
Year_1979 | Y1979 | bigint | 7821552 | , Y1979 bigint '$.Year_1979' | , DAT.Y1979
Year_1980 | Y1980 | bigint | 8120497 | , Y1980 bigint '$.Year_1980' | , DAT.Y1980
Year_1981 | Y1981 | bigint | 8434017 | , Y1981 bigint '$.Year_1981' | , DAT.Y1981
Year_1982 | Y1982 | bigint | 8764621 | , Y1982 bigint '$.Year_1982' | , DAT.Y1982
Year_1983 | Y1983 | bigint | 9111097 | , Y1983 bigint '$.Year_1983' | , DAT.Y1983
Year_1984 | Y1984 | bigint | 9472170 | , Y1984 bigint '$.Year_1984' | , DAT.Y1984
Year_1985 | Y1985 | bigint | 9847899 | , Y1985 bigint '$.Year_1985' | , DAT.Y1985
Year_1986 | Y1986 | bigint | 10232733 | , Y1986 bigint '$.Year_1986' | , DAT.Y1986
Year_1987 | Y1987 | bigint | 10628585 | , Y1987 bigint '$.Year_1987' | , DAT.Y1987
Year_1988 | Y1988 | bigint | 11051504 | , Y1988 bigint '$.Year_1988' | , DAT.Y1988
Year_1989 | Y1989 | bigint | 11523267 | , Y1989 bigint '$.Year_1989' | , DAT.Y1989
Year_1990 | Y1990 | bigint | 12057039 | , Y1990 bigint '$.Year_1990' | , DAT.Y1990
Year_1991 | Y1991 | bigint | 12661614 | , Y1991 bigint '$.Year_1991' | , DAT.Y1991
Year_1992 | Y1992 | bigint | 13325583 | , Y1992 bigint '$.Year_1992' | , DAT.Y1992
Year_1993 | Y1993 | bigint | 14017239 | , Y1993 bigint '$.Year_1993' | , DAT.Y1993
Year_1994 | Y1994 | bigint | 14692686 | , Y1994 bigint '$.Year_1994' | , DAT.Y1994
Year_1995 | Y1995 | bigint | 15320653 | , Y1995 bigint '$.Year_1995' | , DAT.Y1995
Year_1996 | Y1996 | bigint | 15889449 | , Y1996 bigint '$.Year_1996' | , DAT.Y1996
Year_1997 | Y1997 | bigint | 16408954 | , Y1997 bigint '$.Year_1997' | , DAT.Y1997
Year_1998 | Y1998 | bigint | 16896210 | , Y1998 bigint '$.Year_1998' | , DAT.Y1998
Year_1999 | Y1999 | bigint | 17378098 | , Y1999 bigint '$.Year_1999' | , DAT.Y1999
Year_2000 | Y2000 | bigint | 17874725 | , Y2000 bigint '$.Year_2000' | , DAT.Y2000
Year_2001 | Y2001 | bigint | 18390135 | , Y2001 bigint '$.Year_2001' | , DAT.Y2001
Year_2002 | Y2002 | decimal(19,1) | 18919179.0 | , Y2002 decimal(19,1) '$.Year_2002' | , Cast(DAT.Y2002 as bigint) Y2002
Year_2003 | Y2003 | decimal(19,1) | 19462086.0 | , Y2003 decimal(19,1) '$.Year_2003' | , Cast(DAT.Y2003 as bigint) Y2003
Year_2004 | Y2004 | decimal(19,1) | 20017068.0 | , Y2004 decimal(19,1) '$.Year_2004' | , Cast(DAT.Y2004 as bigint) Y2004
Year_2005 | Y2005 | decimal(19,1) | 20582927.0 | , Y2005 decimal(19,1) '$.Year_2005' | , Cast(DAT.Y2005 as bigint) Y2005
Year_2006 | Y2006 | decimal(19,1) | 21160534.0 | , Y2006 decimal(19,1) '$.Year_2006' | , Cast(DAT.Y2006 as bigint) Y2006
Year_2007 | Y2007 | decimal(19,1) | 21751605.0 | , Y2007 decimal(19,1) '$.Year_2007' | , Cast(DAT.Y2007 as bigint) Y2007
Year_2008 | Y2008 | decimal(19,1) | 22356391.0 | , Y2008 decimal(19,1) '$.Year_2008' | , Cast(DAT.Y2008 as bigint) Y2008
Year_2009 | Y2009 | decimal(19,1) | 22974929.0 | , Y2009 decimal(19,1) '$.Year_2009' | , Cast(DAT.Y2009 as bigint) Y2009
Year_2010 | Y2010 | decimal(19,1) | 23606779.0 | , Y2010 decimal(19,1) '$.Year_2010' | , Cast(DAT.Y2010 as bigint) Y2010
Year_2011 | Y2011 | decimal(19,1) | 24252206.0 | , Y2011 decimal(19,1) '$.Year_2011' | , Cast(DAT.Y2011 as bigint) Y2011
Year_2012 | Y2012 | decimal(19,1) | 24909969.0 | , Y2012 decimal(19,1) '$.Year_2012' | , Cast(DAT.Y2012 as bigint) Y2012
Year_2013 | Y2013 | decimal(19,1) | 25576322.0 | , Y2013 decimal(19,1) '$.Year_2013' | , Cast(DAT.Y2013 as bigint) Y2013
Year_2014 | Y2014 | decimal(19,1) | 26246327.0 | , Y2014 decimal(19,1) '$.Year_2014' | , Cast(DAT.Y2014 as bigint) Y2014
Year_2015 | Y2015 | decimal(19,1) | 26916207.0 | , Y2015 decimal(19,1) '$.Year_2015' | , Cast(DAT.Y2015 as bigint) Y2015
Year_2016 | Y2016 | bigint | 27584213 | , Y2016 bigint '$.Year_2016' | , DAT.Y2016

Based on this structure, one can add two further formulas in Excel to prepare the statements as above and then copy the fields (the last two columns were generated using the formulas below):

=", " & TRIM(B2) & " " & C2 & " '$." & TRIM(A2) & "'" 
=", DAT." & TRIM(B2)

Consuming data in which the values are stored in a matrix structure can sometimes involve further challenges, even if this type of storage tends to save space. For example, adding the values for a new year would involve extending the table with one more column, while performing calculations between years would involve referencing each column in formulas. Therefore, transforming the data from a matrix to a normalized structure can have some benefits. This can be achieved by writing a query via the UNPIVOT operator:

-- unpivoting the data 
SELECT RES.Country
, RES.CountryCode
, Cast(Replace(RES.[Year], 'Y', '') as int) [Year]
, RES.Population
--INTO dbo.CountryPopulationPerYear
FROM 
( -- basis data
	SELECT Country
	, CountryCode
	, Y1960, Y1961, Y1962, Y1963, Y1964, Y1965, Y1966, Y1967, Y1968, Y1969
	, Y1970, Y1971, Y1972, Y1973, Y1974, Y1975, Y1976, Y1977, Y1978, Y1979
	, Y1980, Y1981, Y1982, Y1983, Y1984, Y1985, Y1986, Y1987, Y1988, Y1989
	, Y1990, Y1991, Y1992, Y1993, Y1994, Y1995, Y1996, Y1997, Y1998, Y1999
	, Y2000, Y2001, Y2002, Y2003, Y2004, Y2005, Y2006, Y2007, Y2008, Y2009
	, Y2010, Y2011, Y2012, Y2013, Y2014, Y2015, Y2016
	FROM dbo.CountryPopulation
) DAT
UNPIVOT  -- unpivot logic
   (Population FOR [Year] IN  (Y1960, Y1961, Y1962, Y1963, Y1964, Y1965, Y1966, Y1967, Y1968, Y1969
, Y1970, Y1971, Y1972, Y1973, Y1974, Y1975, Y1976, Y1977, Y1978, Y1979
, Y1980, Y1981, Y1982, Y1983, Y1984, Y1985, Y1986, Y1987, Y1988, Y1989
, Y1990, Y1991, Y1992, Y1993, Y1994, Y1995, Y1996, Y1997, Y1998, Y1999
, Y2000, Y2001, Y2002, Y2003, Y2004, Y2005, Y2006, Y2007, Y2008, Y2009
, Y2010, Y2011, Y2012, Y2013, Y2014, Y2015, Y2016)
) RES

This can also be performed in two steps: first preparing the query, then inserting the data into a table (e.g. dbo.CountryPopulationPerYear) on the fly (re-execute the previous query after uncommenting the INTO clause to generate the table).
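
After the table was generated, a quick sanity check can confirm that the unpivoted data look as expected (a sketch based on the table and columns defined above):

-- sanity check for the normalized table
SELECT COUNT(*) NumberRecords
, COUNT(DISTINCT CountryCode) NumberCountries
, MIN([Year]) MinYear
, MAX([Year]) MaxYear
FROM dbo.CountryPopulationPerYear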

--reviewing the data 
SELECT Country
, CountryCode
, AVG(Population) AveragePopulation
, Max(Population) - Min(Population) RangePopulation
FROM dbo.CountryPopulationPerYear
WHERE [Year] BETWEEN 2010 AND 2019
GROUP BY Country
, CountryCode
ORDER BY Country

On the other hand, making comparisons between consecutive years is easier when using a matrix structure:

--reviewing the data 
SELECT Country
, CountryCode
, Y2016
, Y2010
, Y2016-Y2010 [2016-2010]
, Y2011-Y2010 [2011-2010]
, Y2012-Y2011 [2012-2011]
, Y2013-Y2012 [2013-2012]
, Y2014-Y2013 [2014-2013]
, Y2015-Y2014 [2015-2014]
, Y2016-Y2015 [2016-2015]
FROM dbo.CountryPopulation
ORDER BY Country

Unless storage space is a problem, in theory one can store the data in both formats, as some queries can benefit from one structure or the other.
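
For completeness, the normalized structure can be turned back into a (partial) matrix via the PIVOT operator; the following sketch rebuilds only the last three years:

-- rebuilding a partial matrix structure from the normalized table
SELECT RES.Country
, RES.CountryCode
, RES.[2014]
, RES.[2015]
, RES.[2016]
FROM (
	SELECT Country
	, CountryCode
	, [Year]
	, Population
	FROM dbo.CountryPopulationPerYear
) DAT
PIVOT (
	SUM(Population) FOR [Year] IN ([2014], [2015], [2016])
) RES
ORDER BY RES.Country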

Happy coding!