
03 March 2025

💎🤖SQL Reloaded: Copilot Stories (Part VIII: Recursive Functions)

In theory, recursive functions are useful for generating sequences of numbers in which each result builds on the result of the previous step; in other words, a problem is solved by reducing it to a smaller instance of the same problem. In practice, they are limited by the number of iterations allowed (SQL Server enforces a hard limit on nesting), by the available resources (usually memory), and probably by a few other factors. Of course, some algorithms can be implemented to overcome these limitations, and often there are quite simple linear implementations based on a loop, like in the examples below. Therefore, in SQL Server recursive functions remain more of a theoretical concept.

Disclaimer: The below code was generated by Copilot via prompts and is used for demonstration purposes!

This post evaluates whether Microsoft 365 and Edge Copilot can be used to generate the code for some recursive functions. 

Case 1: Factorials

Let's start with the factorial, which calculates the product of the first n positive integers.

Prompt:
Generate a recursive function in SQL Server to calculate the factorial of a number.

Output (M365 Copilot):

-- factorial function in Microsoft 365 Copilot
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    IF @n <= 1
        RETURN 1
    ELSE
        RETURN @n * dbo.Factorial(@n - 1)
END

Output (Edge Copilot):

-- factorial function in Edge Copilot
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    IF @n = 0 OR @n = 1
        RETURN 1
    ELSE
        RETURN @n * dbo.Factorial(@n - 1)
END;

The structure seems correct, and the return type was correctly identified as bigint, the largest integer type available. Unfortunately, both functions lead to the same error in a SQL database:

"Msg 455, Level 16, State 2, Line 9, The last statement included within a function must be a return statement."

The formulas can be consolidated into a single RETURN statement as follows:

-- factorial function adapted (minimal)
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    RETURN 
        CASE 
            WHEN @n = 0 OR @n = 1 THEN 1
        ELSE
            @n * dbo.Factorial(@n - 1) END
END;

Especially when the formulas become complex, it might be a good idea to store the result in a variable and use it in the RETURN statement.

-- factorial function adapted
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @retval as bigint;

    IF @n = 0 OR @n = 1
        SET @retval = 1
    ELSE
        SET @retval = @n * dbo.Factorial(@n - 1)

    RETURN @retval
END;

And, here are some tests for the function:

-- testing the functions for multiple values
SELECT dbo.Factorial (0) -- returns 1
SELECT dbo.Factorial (1) -- returns 1
SELECT dbo.Factorial (2) -- returns 2
SELECT dbo.Factorial (3) -- returns 6
SELECT dbo.Factorial (4) -- returns 24
SELECT dbo.Factorial (5) -- returns 120
SELECT dbo.Factorial (20) -- returns 2432902008176640000
SELECT dbo.Factorial (21) -- returns an error for all bigger values

As can be seen, the function returns an error for values bigger than 20, the output being limited by the bigint data type. One can add a further constraint to test for the maximum value (see the final version below).

Use the following code to drop the function as needed:

-- dropping the function
DROP FUNCTION IF EXISTS dbo.Factorial 

The function's implementation has, however, another important caveat. If negative numbers are provided as parameters, the following error message is thrown:

"Msg 217, Level 16, State 1, Line 1, Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)."

The final function can be defined as:

-- factorial function adapted (final)
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @retval as bigint;

    IF @n BETWEEN 0 AND 20
        IF @n = 0 OR @n = 1
            SET @retval = 1
        ELSE
            SET @retval = @n * dbo.Factorial(@n - 1)

    RETURN @retval
END;
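
A quick sanity check of the guarded version - for out-of-range parameters @retval is never assigned, so the function returns NULL instead of raising the errors above:

-- testing the guarded function with out-of-range values
SELECT dbo.Factorial (-1) -- returns NULL instead of the nesting-level error
SELECT dbo.Factorial (21) -- returns NULL instead of the arithmetic overflow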

One can linearize the function as a loop, though the limitation with the maximum value still exists:

Prompt:
Rewrite the factorial function using a loop.

Output (M365 Copilot):

-- non-recursive function to calculate the factorial of a number using a loop:
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @result BIGINT = 1;
    DECLARE @i INT = 1;

    WHILE @i <= @n
    BEGIN
        SET @result = @result * @i;
        SET @i = @i + 1;
    END

    RETURN @result;
END

-- test the code
SELECT dbo.Factorial(5) AS FactorialResult; -- result 120

One can go beyond 20 by changing the return type and @result from bigint to float, though for larger values the results are only approximations. A possible rewrite is sketched below, followed by the approximated results for various values:
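
Below is a minimal sketch of that float-based variant; it simply replaces the bigint version above (CREATE OR ALTER requires SQL Server 2016 SP1 or later):

-- factorial via loop, float variant (approximate results beyond 20!)
CREATE OR ALTER FUNCTION dbo.Factorial (@n INT)
RETURNS FLOAT
AS
BEGIN
    DECLARE @result FLOAT = 1;
    DECLARE @i INT = 1;

    WHILE @i <= @n
    BEGIN
        SET @result = @result * @i;
        SET @i = @i + 1;
    END

    RETURN @result;
END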

-- test code
SELECT dbo.Factorial(20) AS FactorialResult; -- result 2432902008176640000
SELECT dbo.Factorial(21) AS FactorialResult; -- result 5.109094217170944E+19
SELECT dbo.Factorial(22) AS FactorialResult; -- result 1.1240007277776077E+21
SELECT dbo.Factorial(23) AS FactorialResult; -- result 2.585201673888498E+22
SELECT dbo.Factorial(24) AS FactorialResult; -- result 6.204484017332394E+23
SELECT dbo.Factorial(25) AS FactorialResult; -- result 1.5511210043330986E+25
SELECT dbo.Factorial(26) AS FactorialResult; -- result 4.0329146112660565E+26
SELECT dbo.Factorial(27) AS FactorialResult; -- result 1.0888869450418352E+28
SELECT dbo.Factorial(28) AS FactorialResult; -- result 3.0488834461171384E+29
SELECT dbo.Factorial(29) AS FactorialResult; -- result 8.841761993739701E+30
SELECT dbo.Factorial(30) AS FactorialResult; -- result 2.6525285981219103E+32
SELECT dbo.Factorial(31) AS FactorialResult; -- result 8.222838654177922E+33
SELECT dbo.Factorial(32) AS FactorialResult; -- result 2.631308369336935E+35
SELECT dbo.Factorial(170) AS FactorialResult; -- result 7.257415615307994E+306
SELECT dbo.Factorial(171) AS FactorialResult; -- results in arithmetic overflow

Another structure that allows building recursive logic is the common table expression (CTE).

Prompt:
Create a recursive CTE in SQL Server for calculating n factorial.

Output (M365 Copilot):

-- n factorial as CTE
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, 1 AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 10 -- Change this value to calculate factorial for larger numbers
)
SELECT n, factorial
FROM FactorialCTE;

One can cast the value in the anchor member to bigint or float, as above, with the same constraints resulting from the chosen data type:

-- n factorial as CTE
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, Cast(1 as bigint) AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 20 -- Change this value to calculate factorial for larger numbers
)
SELECT n, factorial
FROM FactorialCTE;
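
For completeness, here's a float-based sketch of the same CTE; the values beyond 20! are only approximations, and the MAXRECURSION hint is needed to go past the default limit of 100 recursions:

-- n factorial as CTE (float variant, sketch)
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, CAST(1 AS FLOAT) AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 170 -- 170! is the largest factorial that fits into a float
)
SELECT n, factorial
FROM FactorialCTE
OPTION (MAXRECURSION 200); -- the default limit of 100 would otherwise stop the query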

Case 2: Approximation of π

Power series can be used to approximate the value of π and other irrational constants. Implementing such an approximation via recursive functions has the same constraints as above, though there are alternatives (including storing the actual value as a constant). 
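
For reference, the series summed by the generated code below is the Leibniz series:

π/4 = 1 - 1/3 + 1/5 - 1/7 + ..., i.e. the k-th term equals 4*(-1)^k/(2*k + 1), summed from k = 0 upwards.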

Prompt:
Generate a recursive function in SQL Server to approximate the value of π using power laws.

Output (M365 Copilot):

-- approximate the value of π using this series:
CREATE FUNCTION dbo.ApproximatePi (@n INT)
RETURNS FLOAT
AS
BEGIN
    IF @n = 0
        RETURN 4.0
    ELSE
        RETURN 4.0 * POWER(-1, @n) / (2 * @n + 1) + dbo.ApproximatePi(@n - 1)
END

-- test the function
SELECT dbo.ApproximatePi(1000) AS PiApproximation;

The function's definition has the same issue as in the previous example and needs to be rewritten around a single RETURN statement:

-- approximate the value of π using this series (rewritten)
CREATE FUNCTION dbo.ApproximatePi (@n INT)
RETURNS FLOAT
AS
BEGIN
    RETURN
        CASE 
            WHEN @n = 0 THEN 4.0
            ELSE
                4.0 * POWER(-1, @n) / (2 * @n + 1) + dbo.ApproximatePi(@n - 1)
        END
END

-- testing the function
SELECT dbo.ApproximatePi (0) -- output: 4
SELECT dbo.ApproximatePi (1) -- output: 2.666666666667
SELECT dbo.ApproximatePi (2) -- output: 3.466666666667
SELECT dbo.ApproximatePi (3) -- output: 2.8952380952389998
SELECT dbo.ApproximatePi (10) -- output: 3.232315809407
SELECT dbo.ApproximatePi (20) -- output: 3.189184782279
SELECT dbo.ApproximatePi (30) -- output: 3.1738423371919993
SELECT dbo.ApproximatePi (31) -- output: 3.110350273699999
SELECT dbo.ApproximatePi (32) -- output: 3.171888735237999
SELECT dbo.ApproximatePi (33) -- throws error

After 32 nested calls the function throws the same nesting-level error as above. Fortunately, such functions can be (re)written using non-recursive implementations, which is the proper way to implement them:

Prompt:
Implement the function without using recursive functions.

Output (M365 Copilot):

-- approximation within loop
CREATE FUNCTION dbo.ApproximatePi (@terms INT)
RETURNS FLOAT
AS
BEGIN
    DECLARE @pi FLOAT = 0.0;
    DECLARE @i INT = 0;

    WHILE @i < @terms
    BEGIN
        SET @pi = @pi + 4.0 * POWER(-1, @i) / (2 * @i + 1);
        SET @i = @i + 1;
    END

    RETURN @pi;
END 

This implementation is more flexible, and a similar approach can be used for other series (see the sketch further below). And here are the results for various parameters:

-- testing the function
SELECT dbo.ApproximatePi(1000) AS PiApproximation; -- result: 3.140592653840002
SELECT dbo.ApproximatePi(10000) AS PiApproximation; -- result: 3.141492653571029
SELECT dbo.ApproximatePi(100000) AS PiApproximation; -- result: 3.141582653466945
SELECT dbo.ApproximatePi(1000000) AS PiApproximation; -- result: 3.1415916535590713

The last call is close, but probably not close enough, especially when the function needs to be used in multiple places. There are more complex power series that converge faster. The direct use of the built-in pi() function is usually recommended:

-- using the pi function
SELECT pi(); -- 3.141592653589793

Moreover, some series can be reduced to formulas based on constants like π, e or Φ.
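
To illustrate the point that the same loop pattern works for other series, here's a hypothetical sketch (not Copilot output) approximating e via the series e = 1/0! + 1/1! + 1/2! + ...:

-- approximating e with the same loop pattern (sketch)
CREATE FUNCTION dbo.ApproximateE (@terms INT)
RETURNS FLOAT
AS
BEGIN
    DECLARE @e FLOAT = 1.0;     -- the term for k = 0 (1/0! = 1)
    DECLARE @term FLOAT = 1.0;  -- running value of 1/k!
    DECLARE @k INT = 1;

    WHILE @k < @terms
    BEGIN
        SET @term = @term / @k; -- 1/k! built incrementally from 1/(k-1)!
        SET @e = @e + @term;
        SET @k = @k + 1;
    END

    RETURN @e;
END

-- test the function
SELECT dbo.ApproximateE(20) AS EApproximation; -- converges quickly toward 2.718281828...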

And, a final piece of code:

-- drop the function
DROP FUNCTION IF EXISTS dbo.ApproximatePi

One can also use a CTE in this case:

Prompt:
Generate a recursive function in SQL Server to approximate the value of π using power laws.

Output (M365 Copilot):

-- series via recursive CTE
WITH PiApproximationCTE AS (
    -- Anchor member: start with k = 0
    SELECT 0 AS k, CAST(4.0 AS FLOAT) AS term, CAST(4.0 AS FLOAT) AS pi_approx
    UNION ALL
    -- Recursive member: calculate the next term and add it to the approximation
    SELECT 
        k + 1,
        CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT),
        pi_approx + CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT)
    FROM PiApproximationCTE
    WHERE k < 1000 -- Change this value to increase the number of terms
)
SELECT k, pi_approx
FROM PiApproximationCTE
OPTION (MAXRECURSION 0); -- Allow for more recursion levels if needed

This time Copilot generated a different solution for the same calculation. (The output is too long to include here, as one record is generated for each iteration.)
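
If only the final approximation is of interest, one can keep just the last row, for example (a minor variation of the generated query):

-- series via recursive CTE, returning only the last approximation
WITH PiApproximationCTE AS (
    SELECT 0 AS k, CAST(4.0 AS FLOAT) AS term, CAST(4.0 AS FLOAT) AS pi_approx
    UNION ALL
    SELECT 
        k + 1,
        CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT),
        pi_approx + CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT)
    FROM PiApproximationCTE
    WHERE k < 1000
)
SELECT TOP (1) k, pi_approx
FROM PiApproximationCTE
ORDER BY k DESC
OPTION (MAXRECURSION 0);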

Happy coding!

Previous Post <<||>> Next Post

22 December 2024

#️⃣Software Engineering: Mea Culpa (Part VI: A Look Back)

Software Engineering Series

Looking back at my university years, I'd say that there were three teachers, respectively courses, that made a considerable impact on students' lives. In the second year I learned Category Algebra which, despite reflecting long-established knowledge and covering topics too complex for most of us, provided an unprecedented layer of abstraction that showed us that Mathematics is not what we thought it to be!

The second course was on Complex plane theory, in which the dean of the university at that time challenged our way of thinking about relatively basic concepts. There was a big gap between what we thought about Mathematics and what the subject proved to be. The course was taught in a post-university year, together with a course on Relativity Theory in which, even if we didn't understand much about the concepts and theories, it was the first time (Graph theory aside) we saw Mathematics applied to a broader context. Please don't misunderstand me! There were many other valuable teachers and courses, though these were the three that made the most important impact on me!

During those years we also attended courses on Fortran, Pascal, C++, HTML and even dBase and, even if each programming language brought something new to the landscape, I can't say they changed how we thought about the world or about problem solving (some of us had had similar courses during the lyceum years). That is what, for example, SQL or more generally a database-related course did bring, even if I had to wait for the first MOOC courses to appear. Equally important was Scott E Page's course on Model Thinking, which introduced the model-thinking approach, a structured way of thinking about models, with applicability to the theoretical and practical aspects of life.

These are the courses that anybody interested in programming and/or IT should attend! Of course, there are also courses on algorithms, optimization, linear and non-linear programming, which bring an arsenal of concepts and techniques to think about; though, even if they might have a wide impact, I can't compare them with the courses mentioned above. A course should (ideally) change the way we think about the world to make a noticeable difference! The same goes for programming and theoretical concepts!...

Long after I graduated, I found many books and authors I wish I had encountered earlier! Quotable Math reflects some of the writings I found useful, though now it seems already too late for those books to make a considerable impact! Conversely, it's never too late to find new ways to look at life, and this is what some books achieve! This is also a way of evaluating critically what we want to read, or what is worth reading!

Of course, there are many courses, books or ideas out there, though if they haven't changed the way you think about life, directly or indirectly, are they worth attending, respectively reading? Conversely, if one hasn't found a new perspective brought by a topic, one has probably barely scratched the surface of the subject, regardless of whether we talk about students or teachers. For some topics it's probably too much to ask, though pragmatically speaking, that's the intrinsic value of what we learn! 

That's a way to think about life and to select the books worth reading! I know, many love reading for the sake of reading, though the value of a book, theory, story or other similar artifact should be judged especially by the impact it has on our way of thinking, respectively on our lives. Just a few ideas that are maybe worth reflecting upon... 

Previous Post <<||>> Next Post

19 March 2024

📊R Language: Drawing Function Plots (Part II - Basic Curves & Inflection Points)

For a previous post on inflection points I needed a few examples, so I wrote the code in the R language. Here's the final output:

Examples of Inflection Points

And, here's the code used to generate the above graphic:

par(mfrow = c(2,2)) #2x2 matrix display

# Example A: Inflection point with bifurcation
curve(x^3+20, -3,3, col = "black", main="(A) Inflection Point with Bifurcation")
curve(-x^2+20, 0, 3, add=TRUE, col="blue")
text (2, 10, "f(x)=-x^2+20, [0,3]", pos=1, offset = 1) #label the second curve
points(0, 20, col = "red", pch = 19) #inflection point 
text (0, 20, "inflection point", pos=1, offset = 1) #label inflection point


# Example B: Inflection point with Up & Down Concavity
curve(x^3-3*x^2-9*x+1, -3,6, main="(B) Inflection point with Up & Down Concavity")
points(1, -10, col = "red", pch = 19) #inflection point 
text (1, -10, "inflection point", pos=4, offset = 1) #label inflection point
text (-1, -10, "concave down", pos=3, offset = 1) 
text (-1, -10, "f''(x)<0", pos=1, offset = 0) 
text (2, 5, "concave up", pos=3, offset = 1)
text (2, 5, "f''(x)>0", pos=1, offset = 0) 


# Example C: Inflection point for multiple curves
curve(x^3-3*x+2, -3,3, col ="black", ylab="x^n-3*x+2, n = 2..5", main="(C) Inflection Point for Multiple Curves")
text (-3, -10, "n=3", pos=1) #label curve
curve(x^2-3*x+2,-3,3, add=TRUE, col="blue")
text (-2, 10, "n=2", pos=1) #label curve
curve(x^4-3*x+2,-3,3, add=TRUE, col="brown")
text (-1, 10, "n=4", pos=1) #label curve
curve(x^5-3*x+2,-3,3, add=TRUE, col="green")
text (-2, -10, "n=5", pos=1) #label curve
points(0, 2, col = "red", pch = 19) #inflection point 
text (0, 2, "inflection point", pos=4, offset = 1) #label inflection point
title("", line = -3, outer = TRUE)


# Example D: Inflection Point with fast change
curve(x^5-3*x+2,-3,3, col="black", ylab="x^n-3*x+2, n = 5,7,9", main="(D) Inflection Point with Slow vs. Fast Change")
text (-3, -100, "n=5", pos=1) #label curve
curve(x^7-3*x+2, add=TRUE, col="green")
text (-2.25, -100, "n=7", pos=1) #label curve
curve(x^9-3*x+2, add=TRUE, col="brown")
text (-1.5, -100, "n=9", pos=1) #label curve
points(0, 2, col = "red", pch = 19) #inflection point 
text (0, 2, "inflection point", pos=3, offset = 1) #label inflection point

mtext("© sql-troubles@blogspot.com @sql_troubles, 2024", side = 1, line = 4, adj = 1, col = "dodgerblue4", cex = .7)
#title("Examples of Inflection Points", line = -1, outer = TRUE)

Mathematically, an inflection point is a point on a smooth (plane) curve at which the curvature changes sign and where the second derivative is 0 [1]. The curvature intuitively measures the amount by which a curve deviates from being a straight line.

In example A, the main function has an inflection point, while the second function, defined only on the interval [0,3], is used to represent a descending curve (aka a bifurcation) for which the same point is a maximum.

In example B, the function was chosen to provide an example with a concave-down section (where the second derivative is negative) and a concave-up section (where the second derivative is positive). So what comes after an inflection point is not necessarily a monotonically increasing function. 

Example C depicts several functions, obtained by varying the power of the first term, which share the same inflection point. One could have shown only the behavior of the functions after the inflection point and, before it, kept only one of the functions (see example A).

Example D shows the same family of functions as example C, though for higher powers of the first term. I kept the function for n=5 to offer a basis for comparison. At first sight, the strange thing is that around the inflection point the change seems small and linear, which is not the case. Both graphics are nonetheless correct: in D the scale is set by the first curve plotted (n=5), while in C it is set by n=3, so the graphic is scaled further away from the inflection point. If one adds n=3 as the first function in example D, the new chart resembles C. Unfortunately, this behavior can be misused to suggest that a curve is roughly linear around the inflection point, which is not the case. 

# Example E: Inflection Point with slow vs. fast change extended
curve(x^3-3*x+2,-3,3, col="black", ylab="x^n-3*x+2, n = 3,5,7,9", main="(E) Inflection Point with Slow vs. Fast Change")
text (-3, -10, "n=3", pos=1) #label curve
curve(x^5-3*x+2,-3,3, add=TRUE, col="brown")
text (-2, -10, "n=5", pos=1) #label curve
curve(x^7-3*x+2, add=TRUE, col="green")
text (-1.5, -10, "n=7", pos=1) #label curve
curve(x^9-3*x+2, add=TRUE, col="orange")
text (-1, -5, "n=9", pos=1) #label curve
points(0, 2, col = "red", pch = 19) #inflection point 
text (0, 2, "inflection point", pos=3, offset = 1) #label inflection point

Comments:
(1) I cheated a bit by calculating the second derivative manually, which is an easy task for polynomials. There seem to be methods for calculating the inflection point programmatically, though the focus here was on providing the examples. 
(2) Examples C and D could have been implemented as part of a loop, though the labels for each curve still need to be added individually. Here's the modified code using a loop:

# Example F: Inflection Point with slow vs. fast change with loop
n <- list(5,7,9)
color <- list("brown", "green", "orange")

curve(x^3-3*x+2,-3,3, col="black", ylab="x^n-3*x+2, n = 3,5,7,9", main="(F) Inflection Point with Slow vs. Fast Change")
for (i in seq_along(n))
{
ind <- as.numeric(n[i])
curve(x^ind-3*x+2,-3,3, add=TRUE, col=toString(color[i]))
}

text (-3, -10, "n=3", pos=1) #label curve
text (-2, -10, "n=5", pos=1) #label curve
text (-1, -5, "n=9", pos=1) #label curve
text (-1.5, -10, "n=7", pos=1) #label curve

Happy coding!

Previous Post <<||>> Next Post

𖣯Strategic Management: Inflection Points and the Data Mesh (Quote of the Day)

Strategic Management Series

"Data mesh is what comes after an inflection point, shifting our approach, attitude, and technology toward data. Mathematically, an inflection point is a magic moment at which a curve stops bending one way and starts curving in the other direction. It’s a point that the old picture dissolves, giving way to a new one. [...] The impacts affect business agility, the ability to get value from data, and resilience to change. In the center is the inflection point, where we have a choice to make: to continue with our existing approach and, at best, reach a plateau of impact or take the data mesh approach with the promise of reaching new heights." [1]

I tried to understand the "metaphor" behind the quote. As the author pinpoints through another quote, the metaphor is borrowed from Andrew Grove:

"An inflection point occurs where the old strategic picture dissolves and gives way to the new, allowing the business to ascend to new heights. However, if you don’t navigate your way through an inflection point, you go through a peak and after the peak the business declines. [...] Put another way, a strategic inflection point is when the balance of forces shifts from the old structure, from the old ways of doing business and the old ways of competing, to the new." [2]

The second part of the quote clarifies the role of the inflection point - the shift from one structure, respectively organization or system, to a new one. The inflection point is not when we take a decision, but when the decision we took, and its impact, shift the balance. If the data mesh comes after the inflection point (see A), then there must be some kind of causality that converges uniquely toward the data mesh, which is questionable, if not illogical. A data mesh eventually makes sense after an organization has reached a certain scale and is thus unlikely to be adopted by small to medium businesses. Even for large organizations the data mesh may not be a viable solution if it doesn't have a proven record of success. 

I could understand if the author had said that the data mesh will lead to an inflection point after its adoption, as is the case with transformative/disruptive technologies. Unfortunately, the track record of BI and Data Analytics projects doesn't give much hope for such a magical moment to happen. Probably becoming a data-driven organization could have such an effect, though for many organizations the effects are still far from expectations. 

There's another point to consider. A curve with inflection points can contain up and down concavities (see B) or there can be multiple curves passing through an inflection point (see C) and the continuation can be on any of the curves.

Examples of Inflection Points [3]

The change can be fast or slow (see D), and in the latter case it may take a long time for the change to be perceived. [2] also notes that the perception that something has changed can come in stages. Moreover, the inflection point can be only local and doesn't describe the future evolution of the curve, which is to say that the curve can change its trajectory shortly after it. It happens in business processes and policy implementations that, after a change was made in extremis to alleviate an issue, a slight improvement is recognized, after which the performance decays sharply. That's the case when the symptoms and not the root causes were addressed. 

More appropriate to describe the change would be a tipping point, which can be defined as a critical threshold beyond which a system (the organization) reorganizes/changes, often abruptly and/or irreversibly.

Previous Post <<||>> Next Post

References:
[1] Zhamak Dehghani (2021) "Data Mesh: Delivering Data-Driven Value at Scale" (book review)
[2] Andrew S Grove (1988) "Only the Paranoid Survive: How to Exploit the Crisis Points that Challenge Every Company and Career"
[3] SQL Troubles (2024) "R Language: Drawing Function Plots (Part II - Basic Curves & Inflection Points)" (link)

18 October 2022

💎SQL Reloaded: Successive Price Increases/Discounts via Windowing Functions and CTEs

I was trying today to solve a problem that apparently requires recursive common table expressions, though they are not (yet) available in the Azure Synapse serverless SQL pool. The problem can be summarized in the table definition below: given a set of Products with an initial Sales price, Price Increases need to be applied successively for each Cycle. The cumulated increase is spelled out in the last column of each line. 

Unfortunately, there is no SQL Server windowing function that multiplies the values of a column incrementally (similar to how a running total works). However, there's a mathematical trick that transforms a product into a sum by applying the Exp (exponential) and Log (logarithm) functions (see Solution 1), and which frankly is more elegant than using a CTE (see Solution 2). 
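
The trick rests on the identity a*b*c = exp(log(a) + log(b) + log(c)), which a one-liner can confirm:

-- the identity behind Solution 1
SELECT EXP(LOG(1.02) + LOG(1.03) + LOG(1.04)) -- ~1.092624, i.e. 1.02*1.03*1.04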

-- create table with test data
SELECT *
INTO dbo.ItemPrices
FROM (VALUES ('ID001', 1000, 1, 1.02, '1.02')
, ('ID001', 1000, 2, 1.03, '1.02*1.03')
, ('ID001', 1000, 3, 1.03, '1.02*1.03*1.03')
, ('ID001', 1000, 4, 1.04, '1.02*1.03*1.03*1.04')
, ('ID002', 100, 1, 1.02, '1.02')
, ('ID002', 100, 2, 1.03, '1.02*1.03')
, ('ID002', 100, 3, 1.04, '1.02*1.03*1.04')
, ('ID002', 100, 4, 1.05, '1.02*1.03*1.04*1.05')
) DAT (ItemId, SalesPrice, Cycle, PriceIncrease, CumulatedIncrease)

-- reviewing the data
SELECT *
FROM dbo.ItemPrices

-- Solution 1: new sales prices with log & exp
SELECT ItemId
, SalesPrice
, Cycle
, PriceIncrease
, EXP(SUM(Log(PriceIncrease)) OVER(PARTITION BY Itemid ORDER BY Cycle)) CumulatedIncrease
, SalesPrice * EXP(SUM(Log(PriceIncrease)) OVER(PARTITION BY Itemid ORDER BY Cycle)) NewSalesPrice
FROM dbo.ItemPrices

-- Solution 2: new sales prices with recursive CTE
;WITH CTE 
AS (
-- initial record
SELECT ITP.ItemId
, ITP.SalesPrice
, ITP.Cycle
, ITP.PriceIncrease
, cast(ITP.PriceIncrease as decimal(38,6)) CumulatedIncrease
FROM dbo.ItemPrices ITP
WHERE ITP.Cycle = 1
UNION ALL
-- recursive part
SELECT ITP.ItemId
, ITP.SalesPrice
, ITP.Cycle
, ITP.PriceIncrease
, Cast(ITP.PriceIncrease * ITO.CumulatedIncrease as decimal(38,6))  CumulatedIncrease
FROM dbo.ItemPrices ITP
    JOIN CTE ITO
	  ON ITP.ItemId = ITO.ItemId
	 AND ITP.Cycle-1 = ITO.Cycle
)
-- final result
SELECT ItemId
, SalesPrice
, Cycle
, PriceIncrease
, CumulatedIncrease
, SalesPrice * CumulatedIncrease NewSalesPrice
FROM CTE
ORDER BY ItemId
, Cycle


-- validating the cumulated price increases (only last ones)
SELECT 1.02*1.03*1.03*1.04 
, 1.02*1.03*1.04*1.05

-- cleaning up
DROP TABLE IF EXISTS dbo.ItemPrices

Notes:
1. The logarithm in SQL Server's implementation works only with positive numbers!
2. For simplification I transformed the percentages (e.g. 1%) into values that are easier to multiply with (e.g. 1.01). The solution can be easily modified to consider discounts instead.
3. When CTEs are not available, one is forced to return to the methods used in SQL Server 2000 (I've been there) and thus use temporary tables or table variables with loops (see the sketch after these notes). Moreover, the logic can be encapsulated in multi-statement table-valued functions (see example), unfortunately another feature not (yet) supported by serverless SQL pools. 
4. Unfortunately, STRING_AGG, which concatenates values across rows, works only with a GROUP BY clause. Anyway, its result is useless without the availability of an Eval function in SQL (see example), though the Expr function available in data flows could be used as a workaround.
5. Even if I thought about using logarithms to transform the product into a sum, I initially ignored the idea, thinking that the solution would be too complex to implement. So, the credit goes to another blogpost. Kudos!
6. The queries also work in a SQL database in Microsoft Fabric. Just replace the Sales schema with the SalesLT schema (see post, respectively the GitHub repository with the changed code).
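
For note 3, here's a minimal sketch of the loop-based alternative using a table variable. It reuses dbo.ItemPrices from above and is meant only as an illustration (I haven't validated it against the serverless SQL pool restrictions):

-- Solution 3 (sketch): cumulating the increases with a table variable and a loop
DECLARE @results TABLE (
  ItemId varchar(10)
, SalesPrice int
, Cycle int
, PriceIncrease decimal(10,2)
, CumulatedIncrease decimal(38,6)
);
DECLARE @cycle int = 1;
DECLARE @maxCycle int;

SELECT @maxCycle = MAX(Cycle) FROM dbo.ItemPrices;

WHILE @cycle <= @maxCycle
BEGIN
    INSERT INTO @results (ItemId, SalesPrice, Cycle, PriceIncrease, CumulatedIncrease)
    SELECT ITP.ItemId
    , ITP.SalesPrice
    , ITP.Cycle
    , ITP.PriceIncrease
    , ITP.PriceIncrease * COALESCE(RES.CumulatedIncrease, 1) -- multiply with the previous cycle's cumulated value
    FROM dbo.ItemPrices ITP
         LEFT JOIN @results RES
           ON ITP.ItemId = RES.ItemId
          AND RES.Cycle = @cycle - 1
    WHERE ITP.Cycle = @cycle;

    SET @cycle += 1;
END

-- final result
SELECT *
, SalesPrice * CumulatedIncrease NewSalesPrice
FROM @results
ORDER BY ItemId
, Cycle;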

Happy coding!

22 December 2019

📊R Language: Drawing Function Plots (Part I - Basic Curves)

Besides the fact that it is free, one of the advantages of the R language is that it allows drawing function plots with simple commands. All one needs is a software package like Microsoft R Open and one is set to go.

For example, to draw the function f(x) = x^3-3*x+2 all one needs to do is to type the following command into the console:

curve(x^3-3*x+2, -3,3)



One can display a second function in the same chart by adding a second command, repeating the process as needed:

curve(x^2-3*x+2, add=TRUE, col="blue")
curve(x^4-3*x+2, add=TRUE, col="red")
curve(x^5-3*x+2, add=TRUE, col="green")


As one can see, a pattern already emerges…

One could easily display the plots each in its own section by defining an array on the display device, thus allowing to focus on particular characteristics of the functions:

par(mfrow=c(2,2))
curve(x^3-3*x+2, -3,3)
curve(x^2-3*x+2, -3,3, col="blue")
curve(x^4-3*x+2, -3,3, col="red")
curve(x^5-3*x+2, -3,3, col="green")



One can be creative and display the functions using a loop:

par(mfrow=c(2,2))
for(n in c(2:5)){
curve(x^n-3*x+2, -3,3)
}


Similarly, one can plot trigonometric functions:

par(mfrow=c(2,3))
curve(sin(x),-pi,pi) 
curve(cos(x),-pi,pi) 
curve(tan(x),-pi,pi) 
curve(1/tan(x),-pi,pi) 
curve(asin(x),-1,1)
curve(acos(x),-1,1)


The possibilities are endless. For complex expressions one can wrap the formula in an R function:

myFunction<- function(x) sin(cos(x)*exp(-x/2))
curve(myFunction, -15, 15, n=1000)


For SQL Server developers, what's even better is the possibility of using Management Studio for the same purpose, though that's a topic for another post.

 
Happy coding!

