
19 March 2024

Strategic Management: Inflection Points and the Data Mesh (Quote of the Day)

Strategic Management
Strategic Management Series

"Data mesh is what comes after an inflection point, shifting our approach, attitude, and technology toward data. Mathematically, an inflection point is a magic moment at which a curve stops bending one way and starts curving in the other direction. It’s a point that the old picture dissolves, giving way to a new one. [...] The impacts affect business agility, the ability to get value from data, and resilience to change. In the center is the inflection point, where we have a choice to make: to continue with our existing approach and, at best, reach a plateau of impact or take the data mesh approach with the promise of reaching new heights." [1]

I tried to understand the "metaphor" behind the quote. As the author pinpoints through another quote, the metaphor is borrowed from Andrew Grove:

"An inflection point occurs where the old strategic picture dissolves and gives way to the new, allowing the business to ascend to new heights. However, if you don’t navigate your way through an inflection point, you go through a peak and after the peak the business declines. [...] Put another way, a strategic inflection point is when the balance of forces shifts from the old structure, from the old ways of doing business and the old ways of competing, to the new. Before" [2]

The second part of the quote clarifies the role of the inflection point - the shift from an old structure, respectively organization or system, to a new one. The inflection point is not the moment we make a decision, but the moment the decision we made, together with its impact, shifts the balance. If the data mesh comes after the inflection point (see A), then there must be some kind of causality that converges uniquely toward the data mesh, which is questionable, if not illogical. A data mesh eventually makes sense once an organization has reached a certain scale and is thus unlikely to be adopted by small and medium businesses. Even for large organizations the data mesh may not be a viable solution as long as it doesn't have a proven record of success.

I could understand if the author had said that the data mesh will lead to an inflection point after its adoption, as is the case with transformative/disruptive technologies. Unfortunately, the track record of BI and Data Analytics projects doesn't give much hope for such a magical moment to happen. Probably, becoming a data-driven organization could have such an effect, though for many organizations the effects are still far from expectations.

There's another point to consider. A curve with inflection points can alternate between upward and downward concavity (see B), or multiple curves can pass through the same inflection point (see C), and the continuation can follow any of them.

Examples of Inflection Points [3]

The change can be fast or slow (see D), and in the latter case it may take a long time for the change to be perceived. [2] also notes that the perception that something has changed can come in stages. Moreover, an inflection point can be only local and doesn't describe the future evolution of the curve, which is to say that the curve can change its trajectory shortly afterwards. It happens in business processes and policy implementations that, after a change was made in extremis to alleviate an issue, a slight improvement is observed, after which performance decays sharply. That's the case when the symptoms rather than the root causes were addressed.

A more appropriate concept for describing the change would be a tipping point, which can be defined as a critical threshold beyond which a system (here, the organization) reorganizes or changes, often abruptly and/or irreversibly.
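
To make the distinction concrete, here is a minimal numerical sketch (in Python, my own illustration rather than something from the referenced post): an inflection point is where the curvature of an adoption-like curve changes sign, while a tipping point is a threshold on the value itself.

import numpy as np

# A logistic adoption curve f(t) = 1 / (1 + exp(-k*(t - t0))); its inflection lies at t0.
k, t0 = 1.0, 5.0
t = np.linspace(0, 10, 1001)
f = 1.0 / (1.0 + np.exp(-k * (t - t0)))

# Inflection point: the numerical second derivative changes sign there.
d2 = np.gradient(np.gradient(f, t), t)
flip = np.where(np.diff(np.sign(d2)) != 0)[0]
print("inflection point near t =", t[flip][0])      # ~5.0: the curvature flips

# Tipping point, by contrast: a critical threshold on the state itself.
threshold = 0.8                                      # assumed critical level
print("tipping point crossed at t =", t[f >= threshold][0])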

Previous Post <<||>> Next Post

References:
[1] Zhamak Dehghani (2021) "Data Mesh: Delivering Data-Driven Value at Scale" (book review)
[2] Andrew S Grove (1996) "Only the Paranoid Survive: How to Exploit the Crisis Points that Challenge Every Company and Career"
[3] SQL Troubles (2024) "R Language: Drawing Function Plots (Part II - Basic Curves & Inflection Points)" (link)

17 March 2024

Business Intelligence: Data Products (Part II: The Complexity Challenge)

Business Intelligence
Business Intelligence Series

Creating data products within a data mesh comes down to "partitioning" a given set of inputs, outputs and transformations into something that looks like a Lego structure, in which each Lego piece represents a data product. The word partition is used loosely, as there can be overlaps in terms of inputs, outputs and transformations, though in an ideal solution the outcome should come close to a partition.
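
To illustrate the idea, here is a conceptual sketch (the names and structure are mine, not taken from a data mesh implementation): each data product bundles its inputs, outputs and transformations, and the distance from a true partition shows up as overlaps between the bundles.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    # One "Lego piece": what it consumes, what it exposes, and how it transforms.
    name: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    transformations: set = field(default_factory=set)

def overlaps(products):
    """Return the pairs of products that share inputs, outputs or transformations."""
    found = []
    for i, a in enumerate(products):
        for b in products[i + 1:]:
            shared = ((a.inputs & b.inputs)
                      | (a.outputs & b.outputs)
                      | (a.transformations & b.transformations))
            if shared:
                found.append((a.name, b.name, shared))
    return found

# Two products that accidentally share a transformation - not a partition anymore.
orders = DataProduct("orders", {"erp.orders"}, {"sales.fact_orders"}, {"dedupe_customers"})
returns = DataProduct("returns", {"erp.returns"}, {"sales.fact_returns"}, {"dedupe_customers"})
print(overlaps([orders, returns]))   # -> [('orders', 'returns', {'dedupe_customers'})]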

Even if the complexity of the inputs and outputs can be neglected, no matter how large their number grows, the same cannot be said about the transformations that must be performed in the process. Moreover, the transformations involve reengineering the logic built into the source systems, which is not a trivial task and requires adequate testing. The transformations are a must and there's no way around them.

When designing a data warehouse or data mart, one of the goals is to keep the redundancy of the transformations and of the intermediary results to a minimum, avoiding the unnecessary duplication of code and data. Code duplication usually becomes an issue when the logic needs to be changed, and in business contexts that can happen often enough to create further challenges. Data duplication becomes an issue when the copies are out of sync, typically because the code is not synchronized or runs at different refresh rates.

Building the transformations as SQL-based database objects has its advantages. There have been many attempts to provide non-SQL operators for the same purpose (e.g. in SSIS or Power Query), though the solutions built on them are difficult to troubleshoot and maintain, the overall complexity increasing with the volume of transformations that must be performed. In data meshes, the complexity also increases with the number of data products involved, especially when multiple stakeholders and different goals are involved (see the challenges of developing data marts that are supposed to be domain-specific).

Organizations answer growing complexity with more complexity. On one side there are the teams of developers, business users and other members of the governance bodies who, together with the solution, create an ecosystem. On the other side there are the inherent coordination and organization meetings, the managing of proposals, the negotiation of scope for data products, their design, testing, etc. The more complex the whole ecosystem becomes, the higher the chances for systemic errors to occur and multiply, respectively to create unwanted behavior among the parties involved. Ecosystems are challenging to monitor and manage.

The more complex the architecture, the higher the chances of failure. Even if some organizations might succeed, it doesn't mean that such an endeavor is for everybody - a certain maturity in building data architectures and data-based artifacts, and in managing projects, must exist in the organization. Many organizations fail at addressing basic analytical requirements; why would one think they are capable of handling increased complexity? Even if one breaks the complexity of a data warehouse into more manageable units, the complexity is just moved to other levels that are more difficult to manage as a whole.

Being able to audit and test each data product individually has its advantages, though once a data product becomes part of an aggregate it can easily get lost in the bigger picture. Thus, a global observability framework is needed that allows monitoring the performance and health of each data product within the aggregate. Besides that, event brokers and other mechanisms are needed to handle failure, availability, security, etc.
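
A minimal sketch of what such observability could look like (purely illustrative, not a reference to any concrete framework): each data product reports its last run to a central registry, so health can be checked across the whole aggregate rather than product by product.

import time

class ObservabilityRegistry:
    def __init__(self):
        self.runs = {}                       # product name -> last run record

    def report(self, product, ok, rows, duration_s):
        # Each data product calls this at the end of its refresh.
        self.runs[product] = {"ok": ok, "rows": rows,
                              "duration_s": duration_s, "at": time.time()}

    def unhealthy(self):
        # The aggregate view: which products failed their last run.
        return [name for name, run in self.runs.items() if not run["ok"]]

registry = ObservabilityRegistry()
registry.report("orders", ok=True, rows=120_000, duration_s=42.0)
registry.report("returns", ok=False, rows=0, duration_s=3.1)
print(registry.unhealthy())              # -> ['returns']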

Data products make sense in certain scenarios, especially when the complexity of architectures is manageable, though attempting to redesign everything from their perspective is like having a hammer in one's hand and treating everything like a nail.

Previous Post <<||>> Next Post

12 March 2024

Systems Engineering: A Play of Problems (Much Ado about Nothing)

Disclaimer: This post was created just for fun. No problem was hurt or solved in the process! 
Updated: 12-Jun-2024

On Problems

Everybody has at least one problem. If somebody doesn’t have a problem, he’ll make one. If somebody can't make a problem, he can always find one. One doesn't need to search long to find a problem. Looking for a problem, one sees problems.

Not having a problem can easily become a problem. It’s better to have a problem than none. The none problem is undefinable, which makes it a problem. 

Avoiding a problem might lead you to another problem. Some problems are so old that it's easier to ignore them.

In every big problem there’s a small problem trying to come out. Most problems can be reduced to smaller problems. A small problem may hide a bigger problem. 

It’s better to solve a problem while it's still small; however, problems can be perceived only once they grow bigger.

In the neighborhood of a problem there’s another problem getting closer. Problems tend to attract each other. 

Between two problems there’s enough place for a third to appear. The shortest path between two problems is another problem. 

Two problems that appear together in successive situations might be the parts of the same problem. 

A problem is more than the sum of its parts.

Any problem can be simplified to the degree that it becomes another problem. 

The complement of a problem is another problem. At the intersection/union of two problems lies another problem.

The inverse of a problem is another problem more complex than the initial problem.

Defining a problem correctly is another problem. A known problem doesn’t make one problem less. 

When a problem seems to be enough, a second appears. A problem never comes alone.  The interplay of the two problems creates a third.

Sharing the problems with somebody else just multiplies the number of problems. 

Problems multiply beyond necessity. Problems multiply beyond our expectations. Problems multiply faster than we can solve them.

Having more than one problem is for many already too much. Between many big problems and an infinity of problems there seems to be no big difference.

Many small problems can converge toward a bigger problem. Many small problems can also diverge toward two bigger problems. 

When neighboring problems exist, people tend to isolate them. Isolated problems tend to find other ways to surprise.

Several problems aggregate and create bigger problems that tend to suck in the neighboring problems.

If one waits long enough, some problems will solve themselves or they will get bigger. Bigger problems exceed one's area of responsibility.

One can get credit for a self-created problem. It takes only a good problem to become famous.

A good problem can provide for a lifetime. A good problem has the tendency to kick back where it hurts the most. One can fall in love with a good problem.

One should not theorize before one has a (good) problem. A problem can lead to a new theory, while a theory brings with it many more problems. 

If the only tool you have is a hammer, every problem will look like a nail. (paraphrasing Abraham H Maslow)

Any field of knowledge can be covered by a set of problems. A field of knowledge should be learned by the problems it poses.

A problem thoroughly understood is always fairly simple, but unfairly complex. (paraphrasing Charles F Kettering)

The problem solver usually created the problem.

Problem Solving

Break a problem in two to solve it more easily. Finding how to break a problem is already another problem. Deconstructing a problem into its parts is no guarantee of solving the problem.

Every problem has at least two solutions from which at least one is wrong. It’s easier to solve the wrong problem. 

It’s easier to solve a problem if one knows the solution already. Knowing a solution is not a guarantee for solving the problem.

Sometimes a problem disappears faster than one can find a solution. 

If a problem has two solutions, more likely a third solution exists. 

Solutions can be used to generate problems. The design of a problem seldom lies in its solutions. 

The solution of a problem can create at least one more problem. 

One can solve only one problem at a time. 

Unsolvable problems lead to problematic approximations. There's always a better approximation, one just needs to find it. One needs to know when to stop searching for a better approximation.

There's never only a single way of solving a problem. Finding another way of solving a problem provides more insight into the problem. More insight complicates the problem unnecessarily.

Solving a problem is a matter of perspective. Finding the right perspective is another problem.

Solving a problem is a matter of tools. Searching for the right tool can be a laborious process. 

Solving a problem requires a higher level of consciousness than the level that created it. (see Einstein) With the increasing complexity of problems, one can run out of consciousness.

Trying to solve an old problem creates resistance against its solution(s). 

The premature optimization of a problem is the root of all evil. (paraphrasing Donald Knuth)

A great discovery solves a great problem but creates a few others on its way. (paraphrasing George Polya)

Solving the symptoms of a problem can prove more difficult than solving the problem itself.

A master is a person who knows the solutions to his problems. To learn the solutions to others' problems he needs a pupil. 

"The final test of a theory is its capacity to solve the problems which originated it." (George Dantzig) It's easier to theorize if one has a set of problems.

A problem is defined as a gap between where you are and where you want to be, though nobody knows exactly where he is or wants to be.

Complex problems are the problems that persist - so are minor ones.

"The problems are solved, not by giving new information, but by arranging what we have known since long." (Ludwig Wittgenstein, 1953) Some people are just lost in rearranging. 

Solving problems is a practical skill, but impractical endeavor. (paraphrasing George Polya) 

"To ask the right question is harder than to answer it." (Georg Cantor) So most people avoid asking the right question.

Solve more problems than you create.

They Said It

"A great many problems do not have accurate answers, but do have approximate answers, from which sensible decisions can be made." (Berkeley's Law)

"A problem is an opportunity to grow, creating more problems. [...] most important problems cannot be solved; they must be outgrown." (Wayne Dyer)

"A system represents someone's solution to a problem. The system doesn't solve the problem." (John Gall, 1975)

"As long as a branch of science offers an abundance of problems, so long is it alive." (David Hilbert)

"Complex problems have simple, easy to understand, wrong answers." [Grossman's Misquote]

"Every solution breeds new problems." [Murphy's laws]

"Given any problem containing n equations, there will be n+1 unknowns." [Snafu]

"I have not seen any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated." (Paul Anderson)

"If a problem causes many meetings, the meetings eventually become more important than the problem." (Hendrickson’s Law)

"If you think the problem is bad now, just wait until we’ve solved it." (Arthur Kasspe) [Epstein’s Law]

"Inventing is easy for staff outfits. Stating a problem is much harder. Instead of stating problems, people like to pass out half- accurate statements together with half-available solutions which they can't finish and which they want you to finish." [Katz's Maxims]

"It is better to do the right problem the wrong way than to do the wrong problem the right way." (Richard Hamming)

"Most problems have either many answers or no answer. Only a few problems have a single answer." [Berkeley's Law]

"Problems worthy of attack prove their worth by fighting back." (Piet Hein)

Rule of Accuracy: "When working toward the solution of a problem, it always helps if you know the answer."
Corollary: "Provided, of course, that you know there is a problem."

"Some problems are just too complicated for rational logical solutions. They admit of insights, not answers." (Jerome B Wiesner, 1963)

"Sometimes, where a complex problem can be illuminated by many tools, one can be forgiven for applying the one he knows best." [Screwdriver Syndrome]

"The best way to escape from a problem is to solve it." (Brendan Francis)

"The chief cause of problems is solutions." [Sevareid's Law]

"The first step of problem solving is to understand the existing conditions." (Kaoru Ishikawa)

"The human race never solves any of its problems, it only outlives them." (David Gerrold)

"The most fruitful research grows out of practical problems."  (Ralph B Peck)

"The problem-solving process will always break down at the point at which it is possible to determine who caused the problem." [Fyffe's Axiom]

"The worst thing you can do to a problem is solve it completely." (Daniel Kleitman)

"The easiest way to solve a problem is to deny it exists." (Isaac Asimov)

"The solution to a problem changes the problem." [Peers's Law]

"There is a solution to every problem; the only difficulty is finding it." [Evvie Nef's Law]

"There is no mechanical problem so difficult that it cannot be solved by brute strength and ignorance. [William's Law]

"Today's problems come from yesterday’s 'solutions'." (Peter M Senge, 1990)

"While the difficulties and dangers of problems tend to increase at a geometric rate, the knowledge and manpower qualified to deal with these problems tend to increase linearly." [Dror's First Law]

"You are never sure whether or not a problem is good unless you actually solve it." (Mikhail Gromov)

More quotes on Problem solving at QuotableMath.blogpost.com.

Resources:
Murphy's laws and corollaries (link)

21 February 2024

Business Intelligence: A Software Engineer's Perspective IV (The Loom of Interactions)

Business Intelligence Series

The process of developing or creating a report is quite simple - there's a demand for data, usually a business problem, the user (aka requestor) defines a set of requirements, the data professional writes one or more queries to address the requirements, which are then used to build one or more reports. The report(s) is/are reviewed by the requestor, and with this the process should be over in most cases. However, that's rather the exception - a long series of changes over multiple iterations is usually necessary; the queries and the reports get modified and even rewritten until they reach their final form, with a lot of effort wasted in the process on both sides.

Common practices for improving the underlying process come down to ensuring that the requirements are complete and understood upfront, that best practices are followed, that the user gets an early review of the work, that there's continuous communication, that the process's performance is monitored, that controls are in place, etc. Standardizing the process helps reduce the number of iterations, but only by a factor. Unfortunately, the bigger issue - the knowledge gap - is often ignored.

There's a lot of literature on problem solving - what steps to follow, how to define the problem, what aspects should be considered, etc. Recipes are good when one knows how to follow them, respectively how to cook, and that can be a tedious process. It is said that framing the right problem is halfway to solving it, and that's so true. Part of the bigger issue is that users need data to better understand the problem, yet the drivers can differ - sometimes it's the problem's complexity, while other times the need is only apparent, and the users start thinking seriously about the problem only once they see the first set of data.

So, the first major gap is between the problem and the user's knowledge of the problem. Experience and theory can help reduce the gap, however the most important progress comes when the user understands the data behind the various processes that overlap with the problem. Sometimes it's enough to explore the data visually, while other times deeper exploration is needed. Data literacy is important, though more important is the exposure to data and problems of varying nature and complexity, respectively having the time for it.

The second gap concerns the data professional - building the data model and the logic for the report requires domain knowledge. The level of knowledge needed varies from case to case, and typically what one doesn't know has the biggest impact. A data professional can help only to the degree of the information, respectively the knowledge, he has about the business. The expectation of providing a report based on a given set of fields might be valid for simple requirements, though the more complex the problem, the more domain knowledge is needed. Moreover, the data professional might need to reengineer the logic from the source system, which can prove challenging when working from the data alone.

Ideally, the two parties should work together starting with the problem's framing and build common ground while covering the knowledge gaps on both sides. Of course, the user doesn't need to dive into the technical details unless the organization leverages this interaction further by adopting the data citizen mindset. Such interactions can help build trust, respectively a basis for further collaboration. Conversely, the more isolated the two parties, the higher the chances for more iterations to occur.

Covering the knowledge gaps might look like a redistribution of the effort, though by keeping the status quo there is little chance for growth!

14 February 2024

Business Intelligence: A One Man Show VI (The Lakehouse Perspective)

Business Intelligence Suite

This post continues the ideas on Christopher Laubenthal's article "Why one person can't do everything in the data space" [1] and why his analogy between a college's functional structure and the core data roles is poorly chosen. In the last post I mentioned, as a first argument, that the two constructions have different foundations.

Secondly, it's a matter of construction, namely the steps used to arrive from one state to another. Indeed, there's somebody who builds the data warehouse (DWH), somebody who builds the ETL/ELT pipelines for moving the data from the sources to the DWH, somebody who builds the semantic data model that includes the business-related logic, respectively people who tap into the data for reporting, data visualizations, data science projects, and whatever else is still needed in the organization. On top of this, there should be somebody who manages the DWH. I haven't associated any roles with these steps because one of the core roles can be responsible for more than one step.

In the case of a lakehouse, it is the data engineer who moves the data from the various data sources to the data lake, if that doesn't happen already by design or configuration. As per my understanding, the data engineers are the ones who design and build the new lakehouse, and move, transform and manage the data as required. The Data Analysts, Data Scientists and maybe some Information Designers can then tap into the data. However, the DWH and the lakehouse(s) are just technologies that facilitate their work; they can still do their work if the same data are available by other means.

As far as the dorm analogy is concerned, the verbs were chosen to match the way data warehouses (DWH) or lakehouses are built, though the congruence of the steps is questionable. One could have compared the number of students with the number of data entities, but not with the data themselves. Usually, students move by themselves and occupy their places. The storytellers, assistants and researchers are independent of whether the students are hosted in the dorm or not. Therefore, the analogy seems a bit forced.

Frankly, I covered all the steps except the ones related to Data Science by myself in both described scenarios. It helped that I knew the data from the data sources and the transformation rules I had to apply, respectively the techniques needed for moving and transforming the data, and the volume of data entities was somehow manageable. Conversely, 1-2 more resources in the area of data analysis and visualization could have helped bring more value to the business.

This opens up the challenge of scale, and it has to do with systems engineering and how the number of components and the interactions between them increase a system's complexity and the demand for managing the respective components. In the simplest linear models, for each multiplication of the number of components of a certain type in the organization, the number of resources managing the respective layer grows to some degree by the same multiplier. E.g. if a data engineer can handle x data entities in a unit of time, then at least n data engineers are likely required to handle n*x entities. However, the combined output is only a fraction of n*x, given the dependencies existing between components and other constraints.
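
A toy calculation with assumed numbers (not from the original argument) illustrates the point: the team scales linearly with the entities, but coordination and dependencies eat into the combined output.

x = 50                 # entities one engineer can handle per unit of time (assumed)
penalty = 0.10         # fraction of throughput lost per additional engineer (assumed)

for n in (1, 2, 4, 8):
    ideal = n * x                                   # the naive n*x expectation
    effective = ideal * max(0.0, 1 - penalty * (n - 1))
    print(f"{n} engineer(s): ideal {ideal}, effective {effective:.0f}")
# 1 engineer(s): ideal 50, effective 50
# 2 engineer(s): ideal 100, effective 90
# 4 engineer(s): ideal 200, effective 140
# 8 engineer(s): ideal 400, effective 120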

An optimization problem comes down to finding out which data roles to choose to cover an organization's needs. A one-man show can be the best solution for small organizations, though unless there's a good division of labor, bringing in a second person will first make the throughput slower before it becomes faster.

Previous Post <<|||>> Next Post

Resources:
[1] Christopher Laubenthal (2024) "Why One Person Can’t Do Everything In Data" (link)

13 February 2024

Business Intelligence: A One Man Show II (In the Cusps of Complexity)

Business Intelligence Series

Today I watched on YouTube the Power BI Tips episode "One Person to Do Everything", which I had missed last week. The main topic is based on Christopher Laubenthal's article "Why one person can't do everything in the data space". The author's arguments are based on an analogy between the various data areas and a college's functional structure. Reading the article, I must say that it takes a poorly chosen analogy to make messy things even messier!

One of the most confusing things is that there are so many data-related, context-dependent roles with considerable overlap that it becomes more and more difficult to understand what they cover. The author considers the roles of Data Architect, Data Engineer, Database Administrator (DBA), Data Analyst, Information Designer and Data Scientist. However, for almost every aspect of a data architecture there are also developers on the database (backend) and reporting (front-end) side. Conversely, there are other data professionals on the management side for the various knowledge areas of Data Management: Data Governance, Data Strategy, Data Security, Data Operations, etc. There are also roles on the border between the business and the technical side, like Data Stewards, Business Analysts, Data Citizens, etc.

There are two main aspects here. From the historical perspective, many of these roles appeared when a new set of requirements or a new layer appeared in the architecture. First came maybe the DBA, who was supposed primarily to administer the database. Being a keeper of the data and having some knowledge of the data entities, it was easy for him/her to export data for the various reporting needs. In time such activities were taken over by a second category of data professionals. Then the data were moved to Decision Support Systems and later to Data Warehouses and Data Lakes/Lakehouses, this evolution requiring other professionals to address the challenges of each layer. Every activity performed on the data requires a certain type of knowledge, which in the end can result in a new role denomination.

The second perspective results from the management of data and the knowledge areas associated with it. While in small organizations with one or two systems in place one doesn't need to talk about Data Operations, in big organizations, where a data center or something similar may be in place, Data Operations can easily become a topic of its own, with a management structure needing to be in place for its "effective and efficient" management. The same can happen in the other knowledge areas and their interaction with the business. It's an inherent tendency to answer complexity with complexity, which in the long term can be to the detriment of any business. In extremis, organizations tend to have a whole team in each area, which can further increase the overall complexity by a small to not-so-small magnitude.

Fortunately, one of the benefits of technological advancement is that much of the complexity can be moved somewhere else, and these are the areas where the cloud brings the most advantages. Parts of the architecture, or all of it, can be deployed into the cloud and managed by cloud providers and third parties on an on-demand basis at stable costs. Moreover, with the increasing maturity and integration of the various layers, the impact of the various roles in the overall picture is reduced considerably, as areas like governance, security or operations are built in as services, thus requiring fewer resources.

With Microsoft Fabric, all the data needed for reporting becomes, in theory, easily available in OneLake. Unfortunately, another type of complexity is thereby dumped onto other professionals' shoulders, and these aspects need to be considered further.

Previous Post <<|||>> Next Post

Resources:
[1] Christopher Laubenthal (2024) "Why One Person Can’t Do Everything In Data" (link)
[2] Power BI tips (2024) Ep.292: One Person to Do Everything (link)


02 January 2024

Systems Engineering: Never-Ending Stories in Praxis (Quote of the Day)

Systems Engineering
Systems Engineering Cycle

"[…] the longer one works on […] a project without actually concluding it, the more remote the expected completion date becomes. Is this really such a perplexing paradox? No, on the contrary: human experience, all-too-familiar human experience, suggests that in fact many tasks suffer from similar runaway completion times. In short, such jobs either get done soon or they never get done. It is surprising, though, that this common conundrum can be modeled so simply by a self-similar power law." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

I found the above quote while browsing through Manfred Schroeder's book on fractals, chaos and power laws, a book that also explores related topics like percolation, recursion, randomness, self-similarity, determinism, etc. Unfortunately, when one goes beyond the introductory notes of each chapter, the subjects require more advanced knowledge of mathematics, respectively further analysis and exploration of the underlying models. Despite this, the book is still an interesting read with ideas to ponder.

I have found myself a few times in the situation described above - working on a task that didn't seem to end, despite investing more effort, respectively approaching the solution from different angles. The reasons behind such situations were multiple, typically lying beyond my direct area of influence and/or decision. In a systemic setup, there are parts of a system that find themselves in opposition, different forces pulling in distinct directions. It can be the case of interests, goals, expectations or solutions which compete or are subject to politics.

For example, in Data Analytics or Data Science there's a high chance that no progress can be made beyond a certain point without first addressing the quality of the data or design/architectural issues. Integrations between applications, data migrations and other solutions which rely heavily on data are sensitive to data quality and to the architecture's reliability. As long as the source of variability (data, data generators) is not stabilized, providing a stable solution has low chances of success, no matter how much effort is invested, respectively how performant the tools are.

Some of the issues can be solved by allocating resources to handle their implications. Unfortunately, some organizations attempt to solve such issues by allocating resources to the wrong areas or by addressing the symptoms instead of taking a step back, looking at the problem systemically, and analyzing and modeling it accordingly. Moreover, there are organizations which refuse to recognize they have a problem at all! In the blame game, it's much easier to shift the responsibility onto somebody else's shoulders.

Defining the right problem to solve might prove more challenging than expected and usually this requires several iterations in which the knowledge obtained in the process is incorporated gradually. Other times, one attempts to solve the correct problem by using the wrong methodology, architecture and/or skillset. The difference between right and wrong depends on the context, and even between similar problems and factors the context can make a considerable difference.

The above quote is corroborated by situations in which perfection is demanded. In IT and management setups, excellence is often confused with perfection, the latter being impossible to achieve, though many managers take it as the norm. There's a critical point above which the effort invested outweighs the solution's plausibility by an exponential factor.

Another source of unending effort is requirements that change frequently and swiftly - e.g. the rate at which changes occur outpaces the progress made toward a solution. Unless the requirements are stabilized, the effort spirals outward (in an exponential manner).

Finally, there are extreme cases, in which, for example, the complexity of the task outweighs the skillset and/or the number of resources available. Moreover, there are problems which admit plausible solutions, though there are also problems (especially systemic ones) which don't have stable or plausible solutions.

Behind most such cases lie factors that tend to behave chaotically, especially when the environment is far from favorable. The models used to depict such relations are nonlinear, sometimes expressed as power laws - one quantity varying as a power of another, with the variation increasing with each generation.
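
As a small numerical sketch of Schroeder's observation (an illustration with assumed parameters, not an example taken from the book): when completion times follow a heavy-tailed power law, the expected remaining time grows with the time already invested.

import numpy as np

# Completion times drawn from a Pareto (power-law) distribution.
rng = np.random.default_rng(42)
alpha, xm = 1.5, 1.0                         # tail exponent and minimum duration (assumed)
durations = xm * (rng.pareto(alpha, 1_000_000) + 1.0)

for t in (1, 5, 20, 100):
    still_open = durations[durations > t]    # tasks not yet finished at time t
    print(f"after {t} units: expected remaining ~ {np.mean(still_open - t):.0f}")
# For a Pareto tail the remaining time scales roughly as t/(alpha - 1),
# i.e. the longer a task has already run, the further away its completion recedes.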

Previous Post <<||>> Next Post

Resources:
[1] Manfred Schroeder (1990) "Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise" (quotes)

14 October 2023

Graphical Representation: On Insights II (The Complexity Perspective)

Graphical Representation
Graphical Representation Series

Scientists attempt to discover laws and principles, and for this they conduct experiments and build theories and models rooted in the data they collect. In the business setup, data professionals analyze the data to identify patterns, trends, outliers or anything else that can lead to new information or knowledge. On one side, scientists choose the boundaries of the systems they study; for data professionals, even if the systems are usually given, they can make similar choices.

In theory, scientists are more flexible in what data they collect, though they might have constraints imposed by the boundaries of their experiments and the tools they use. For data professionals, most of the data they need are already there, in the systems the business uses, though the constraints reside in the intrinsic and extrinsic quality of the data - whether the data are fit for purpose. Both parties need to work around limitations, or attempt to improve the experiments, respectively the systems.

Even if the data might have different characteristics, this doesn't mean that the methods applied by data professionals can't be used by scientists and vice versa. The closer data professionals move from Data Analytics toward Data Science, the greater the overlap between the business and the scientific setup.

Conversely, the problems data professionals meet have different characteristics. Scientists' outlook is directed mainly at the phenomena and processes occurring in nature and society, where randomness, emergence and chaos seem to feel at home. Business processes deal more with predefined controlled structures, cyclicity, higher dependency between processes, feedback and delays. Even if the problems may seem different, they can be modeled with system dynamics.

Returning to data visualization and the problem of insight, there are multiple questions. Can we use simple designs or characterizations to find the answer to complex problems? What must the characteristics of a piece of information or knowledge be to generate insight? How can a simple visualization generate an insight moment?

Appealing to complexity theory, there are several general approaches to handling complexity. One approach resides in answering complexity with complexity. This means building complex data visualizations that attempt to model the problem's complexity. For example, this could be done by building a complex model that reflects the problem studied, and a set of complex visualizations that reflect its different important facets. Many data professionals advise against this approach as it goes against the simplicity principle. On the other hand, starting with something complex and removing the nonessential can prove to be a workable strategy, even if it involves more effort.

Another approach resides in reducing the complexity of the problem, either by relaxing the constraints or by breaking the problem into simpler problems and addressing each of them with visualizations. Relaxing the constraints allows studying, depending on the case, a more general problem or a linearization of the initial problem. Breaking the problem down into problems that can be solved more easily can help in better understanding the general problem, though we might lose sight of emergence and other behaviors that characterize complex systems.

Providing simple visualizations for complex problems implies a good understanding of the problem, its solution(s) and the overall context, which frankly is harder to achieve the more complex a problem is. Understanding a problem requires a minimum of knowledge that needs to be reflected in the visualization(s). Even if some important aspects are assumed to be known, they still need to be confirmed by the visualizations; otherwise any deviation from the assumptions can lead to a new problem. Therefore, it's questionable whether simple visualizations can address the complexity of problems in a general manner.

Previous Post <<||>> Next Post 

03 October 2023

ERP Implementations: It’s a Matter of Complexity

 

ERP Implementation

There are many factors to blame for an implementation process's inefficiency, however many of them can be associated with the complexity of the project itself, respectively of the application(s) involved. The problem of complexity can be addressed either by answering complexity with complexity - building a complex team to handle the tasks, which is seldom feasible even if many organizations do it - or by simplifying the implementation process and/or the application.

As far as the project is concerned, the complexity starts with requirements elicitation, the iterative transformations the requirements undergo until the final functional requirements document is finalized, their evaluation and mapping to features, respectively the identification of gaps. It’s a complex task because it involves understanding the business as well as the functionality available in the target system(s). Then comes the effort estimation, which, as the name suggests, is just a guess based on available historical numbers and/or experts’ opinions. High-level requirements are easier to manage than low-level requirements, however they allow for more gaps in understanding. The more detailed the specifications, the more they should help in the estimation process, though that’s only the theory. A considerable number of factors can impact the process.

Even if there are standard activities in the implementation process, the number of resources involved from the customer as well as from the partner(s) side makes the whole planning process a nightmare for any Project Manager, no matter how experienced he/she is.

Ideally, each member of the team should behave like a trooper, knowing by instinct when and what needs to be done, what the expectations are, etc. This might come close to the expectation on the partner side, as those resources have more likely participated in similar projects, though there’s always a mix of expertise levels, with resources migrating between projects. Unfortunately, that’s seldom (if ever) the case on the customer side, where the gap between reality and expectation is considerable.

Each team member requires a minimum of information/knowledge to perform the activities assigned. Moreover, the volume of coordination and cooperation is considerably higher than in other projects, a complexity that increases with the organization’s size and is inversely proportional to the organization’s maturity in managing projects and implementation-related activities. There’s thus a minimum of initial communication needed, and further communication needs to occur between the parties involved throughout. Moreover, the lower the cohesion between the parties, the higher the need for communication, and this applies especially when multiple organizations are involved in the project.

The triple constraint of Project Management between scope, cost and time, respectively its impact on quality, weighs heavily on the project. Resources need to be available when the project needs them and, especially on the partner side, only when they are needed. For the implementation project to be feasible for the partner, its resources must work on several projects in parallel, or the timing must be perfect so that no waiting times are involved, respectively the effort is concentrated only when needed. Such precision may be possible at the project’s beginning, though the further the project evolves, the more challenging the coordination of resources becomes. Similar considerations apply to the customer as well.

Thus, a more realistic expectation is to have resources available only at certain points in time, and the resources should be capable of juggling between projects, respectively between the project and other activities. Prioritizing is a must, and sometimes operations or other projects have a higher priority. When the time is not available, resources have to compromise by reducing the level of quality.

On the other side, it would be great if most of the effort could be concentrated at the beginning of the project, the later interactions being minimal.  

Previous <<||>> Next

21 March 2021

Strategic Management: The Impact of New Technologies III (Checking the Vital Signs)

Strategic Management

An organization which has gone through a major change, like the replacement of a strategic system (e.g. an ERP/BI implementation), needs to go through a period of attentive supervision to address the inherent issues, which ideally need to be handled as they arise to minimize their future effects. Some organizations might even go through a convalescence period, which risks prolonging itself if the appropriate remedies aren’t found. Therefore, one needs an entity who/which has the skills to recognize the symptoms, understand what’s happening and why, respectively to identify the appropriate actions.

Given the technologies’ multi-layered complexity and the volume of knowledge needed to understand them, the role of the doctor can seldom be taken by one person. Moreover, the patient is an organization, and each person in the organization usually has only local knowledge about the patient. The needed knowledge is dispersed through the organization, and one needs to tap into it, identify the people close to the technologies and the business areas, respectively allow such people to exchange information on a regular basis.

The people who should know the organization best are, in theory, the management, however they are usually too far away from the technologies and often too busy with management topics. IT professionals are close to the technologies, though sometimes too far away from the patient. The users have too narrow an overview, while for logistical and economic reasons the number of people involved should be kept to a minimum. A compromise is to designate one person from each business area who works with any of the strategic systems, and ensure that they have the technical and business knowledge required. It’s nothing but the key-user concept, though for it to work the key users need not only knowledge but also the empowerment to act when the symptoms appear.

Big organizations also have a product owner for each application, who supervises the application through its entire lifecycle and who needs to coordinate with IT, the business and the service providers. This is probably a good idea in order to ensure that the ROI is reached over time, respectively that the needs of the system are considered within the IT operations context. In small organizations, the role can be taken by a technical or a business resource with deeper skills than the average user, usually a key user. However, unless combined with the key-user role, the product owner’s focus will be the product and seldom the business themes.

The issues that need to be overcome after major changes are usually cross-functional, making it imperative for people to work together and find solutions. Unfortunately, it’s also in human nature to wait until the issues are big enough to get proper attention. Unless the key users already have time allocated for such topics, the issues will be lost in the heap of operational and tactical activities. This time must be allocated for all key users and for the technical resources needed to support them.

Some organizations build temporary working parties (groups of experts working together to achieve specific goals) or similar groups. However, the status of such a group needs to be permanent if the organization wants to keep its health continuously in check and to build the needed expertise and awareness about actual or potential issues. Centers of excellence/expertise (CoE) or competency centers (CC) are such working groups with permanent status, having defined roles, responsibilities and processes for supporting and promoting the effective use of technologies within the organization, respectively for monitoring and systematically addressing the risks and opportunities associated with them.

There’s also the null hypothesis, doing nothing, relying solely on employees’ professionalism, though without defined responsibility, accountability and empowerment, it can get messy.

Previous Post <<||>> Next Post

04 March 2021

Project Management: Projects' Dynamics II (Motion)

Project Management

Motion is the action or process of moving or being moved between an initial point and a final or intermediate one. From the tiniest endeavors to the movement of the planets and beyond, everything is governed by motion. While the laws of nature seem to reveal an inner structural perfection, the activities people perform are quite often far from perfect, which is acceptable if we consider that (almost) everything is a learning process. What is probably less acceptable is the volume of inefficient motion we can sometimes easily categorize as waste.

The waste associated with motion can take many forms: sorting through a pile of tools to find the right one, searching for information, moving back and forth to reach a destination or achieve a goal, etc. Suboptimal motion can have important effects on an organization, resulting in reduced productivity, respectively higher costs.

While for repetitive activities that involve a certain degree of similarity a way to optimize the motion can typically be found, the higher the uncertainty of the steps involved, the more difficult it becomes to optimize it. It’s the case of discovery endeavors, in which the path between start and destination can’t be traced beforehand, respectively when the destination or the path in between can’t be depicted to the needed level of detail. A strategy’s implementation, ERP implementations and other complex projects, especially the ones dealing with new technologies and/or incomplete knowledge, tend to be exploratory in nature and thus fall under this latter type of motion.

In other words, one must know at a minimum the starting point, the destination, how to reach it and what it takes to reach it – resources, knowledge, skillset. When one has all this information, one can go on and estimate how long it will take to reach the destination, though the estimate reflects the information available as well as the estimator’s skill in translating that information into a realistic roadmap. Each new piece of information has the potential of impacting the whole process considerably, in extremis to the degree that one must start the journey anew. The complexity of such projects and the volume of uncertainty can make estimation difficult if not impossible, no matter how good the estimator's skills are. At best an estimator can come up with a best- and a worst-case estimation, both however dependent on the assumptions made.

Moreover, complex projects are sensitive to the initial conditions or auspices under which they start. This sensitivity can push a project into a totally different direction or pace, which can be reinforced positively or negatively as the project progresses. It’s a continuous interplay between internal and external factors and components that can create synergies or have adverse effects, with the potential of reaching tipping points.

Related to the initial conditions, as praxis sometimes shows, for entities in continuous movement (like organizations) it’s also important to know where one is coming from (and at what speed), as the previous impulse (driving force) can be further used or steered as needed. Metaphorically, a project will need a certain time to find the right pace if it lacks the proper impulse.

Unless the team is trained to play, and plays, like an orchestra, the impact of deviations from expectations can hardly be quantified. To minimize the waste, a project’s journey should ideally deviate minimally from the optimal path, which can be challenging to achieve as a project’s mass can pull the project in one direction or the other. The more the project advances, the bigger the mass, a fact which can make a project unstoppable. When such high-mass projects are stopped, their momentum can continue to haunt the organization for years afterwards.

Previous Post <<||>> Next Post

29 September 2020

Strategic Management: Simplicity V (ERP Implementations' Story I)

Strategic Management

ERP implementations are probably one of the most complex types of projects one deals with in the IT world, however their complexity seldom resides in the technologies themselves, but in the effort that needs to be made by organizations before, during and after the implementation. Through their transformative nature, ERP implementations have the potential of changing the whole organization if that potential is exploited accordingly, which is unfortunately not always the case. Therefore, the challenges don’t come down only to managing a project or implementing a technology, but also to managing change, and that usually happens or needs to happen at several levels.

Typically, the change is considered mainly at the IT infrastructure and process level, because at these levels most of the visible changes happen – that’s what steals the show. For the whole project duration, it's about replacing one or more legacy systems and making sure that the new infrastructure works as expected. The more an organization deviates from the standard, the more effort is needed, and this effort can exhaust an organization’s resources to the degree that it will need some time to recover afterwards - financially, but maybe more importantly from a vital point of view.

Even if the technological and process layers are important, as they form the foundation on which an organization builds, besides the financial and material flows there are also the data, information and knowledge flows, which seem to be neglected. Quite often that’s where the transformational potential resides. If an organization is not able to change these flows for the better, in the long term the implementation will have to deal with problems people wished had been addressed much earlier, when the effort and its effect would have met the lowest resistance, respectively had the highest impact.

An ERP implementation involves the migration of data between source(s) and target(s), the data requirements, including that of appropriate quality, being regarded with respect to the target system(s). As within the data migration steps the data are extracted from the various sources, enriched, and prepared for import into the target system(s), there is the potential of bringing data quality to a level which would help the organization further. It’s probably simplest to imagine the process as taking the data from one place, cleaning and enriching it to bring it to the needed form, and then putting it into the new system. It’s a unique chance of improving data quality without touching the source or target system(s) while getting considerable value.
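
A minimal sketch of that take-clean-put idea (illustrative field names and rules, not from an actual migration): the cleansing and enrichment happen in the staging step, without touching the source or the target system.

# Records as they come out of a legacy export.
legacy_customers = [
    {"id": "001", "name": "  acme gmbh ", "country": None},
    {"id": "002", "name": "Contoso Ltd", "country": "de"},
]

def cleanse(record):
    """Trim and normalize the fields; fill defaults agreed with the business."""
    return {
        "id": record["id"].lstrip("0") or "0",
        "name": record["name"].strip().title(),
        "country": (record["country"] or "DE").upper(),   # assumed default country
    }

staged_for_import = [cleanse(r) for r in legacy_customers]
print(staged_for_import)
# [{'id': '1', 'name': 'Acme Gmbh', 'country': 'DE'},
#  {'id': '2', 'name': 'Contoso Ltd', 'country': 'DE'}]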

Unfortunately, many organizations’ efforts to improve the quality of their data stop after the implementation. If there’s no focus and there are no structures in place to continue the effort, sooner or later data quality will decrease despite the earlier efforts. Investing, for example, in a long-term data quality improvement or even a data management initiative might prove to be an exploratory and iterative process in which mistakes may be made and the direction might need to be changed, though, as long as learning is involved, therein often resides the power of changing for the better.

When one talks about information there are two aspects to it: how an organization gets from data to actionable information that reaches the people who need it in time, respectively how information is further aggregated, recombined, shared and harnessed into knowledge. These are the first three layers of the knowledge (aka DIKW) pyramid, and an organization’s real success story lies in how it can manage these flows together, while increasing the value they provide for the organization. It’s an effort that must start with the implementation itself, or even earlier, and continue after the implementation, as the organization sees fit.

Previous Post <<||>> Next Post

Written: Sep-2020, Last Reviewed: Mar-2024

27 September 2020

Strategic Management: Simplicity IV (Designing for Simplicity)


More than a century and a half ago, in his course on the importance of style in literature, George Lewes wisely remarked that 'the first obligation of Simplicity is that of using the simplest means to secure the fullest effect' [1]. This is probably the most important aspect the adopters of the KISS mantra seem to ignore – solutions need to be simple while covering all or most important aspects to ensure the maximum benefit. The challenge for many resides in defining what the maximum benefit is. This state of the art is typically poorly understood, especially when people don’t understand what’s possible, respectively what’s necessary to make things work smoothly.

To make the simplicity principle work, one must envision the desired state of a product or solution and trace back what’s needed to achieve that vision. One can aim for the maximum or the minimum possible, respectively for anything in between. That’s true at least in theory; in praxis there are constraints that limit the range of achievement, constraints ranging from the availability of resources, their maturity or the available time to the limits of growth - the learning capacity of individuals and of the organization as a whole.

On the other side, following the 80/20 principle, one could in theory achieve 80% of a working solution with 20% of the effort needed to achieve the full 100%. This principle comes with a catch too, because one needs to focus on the important components or aspects of the solution for this to work. Otherwise, one is forced to do exploratory work in which the learning is gradually assimilated into the solution. This implies continuous feedback, respectively changing the targets as one progresses through multiple iterations. The approach is typically common to ERP implementations, BI and Data Management initiatives, or similar transformative projects which attempt to change an organization’s data, information or knowledge flows - the backbones organizations are built upon.

These two principles can be used together to shape an organization. While simplicity sets a target or compass for quality, the 80/20 principle provides the means of splitting the roadmap and effort into manageable targets while allowing one to identify and prioritize the critical components, which seldom come down to technology alone. While technologies provide a potential for transformation, in the end it is an organization’s setup that plays the transformative role.

For transformational synergies to happen, each person involved in the process must have a minimum of the necessary skillset, knowledge and awareness of what’s required and of how a solution can be harnessed. This minimum can initially be addressed through training and self-learning; however, without certain mechanisms in place, the magic will not happen by itself. Change needs to be managed from within, as part of an organization’s culture, by the people close to the flow, and, when necessary, also from the outside, by the ones who can provide guiding direction. Ideally, a strategic approach is needed in which the vision, mission, goals, objectives, and roadmap are sketched, intermediary targets are adequately mapped and pursued, and progress is adequately tracked.

Thus, besides the technological components, one needs to consider the organizational components required to support and manage change. These components form a structure which needs to adhere by design to the same principle of simplicity. According to Lewes, 'simplicity of structure means organic unity' [1], which can imply harmony, robustness, variety, balance, economy or proportion. Without these qualities the structure of the resulting edifice can break under its own weight. Moreover, paraphrasing Eric Hoffer, simplicity marks the end of a continuous process of designing, building, and refining, while complexity marks a primitive stage.


Written: Sep-2020, Last Reviewed: Mar-2024

References:
[1] George H Lewes (1865) "The Principles of Success in Literature"

Considered quotes:
"Simplicity of structure means organic unity, whether the organism be simple or complex; and hence in all times the emphasis which critics have laid upon Simplicity, though they have not unfrequently confounded it with narrowness of range." (George H Lewes, "The Principles of Success in Literature", 1865)
"The first obligation of Simplicity is that of using the simplest means to secure the fullest effect. But although the mind instinctively rejects all needless complexity, we shall greatly err if we fail to recognise the fact, that what the mind recoils from is not the complexity, but the needlessness." (George H Lewes, "The Principles of Success in Literature", 1865)
"In products of the human mind, simplicity marks the end of a process of refining, while complexity marks a primitive stage." (Eric Hoffer, 1954)

28 June 2020

Strategic Management: Simplicity II (A System's View)

Strategic Management

Each time one discusses in IT software and hardware components interacting with each other, one talks about a composite referred to as a system. Even if the term Information System (IS) is related to it, a system is defined as a set of interrelated and interconnected components that can be considered together for specific purposes or for simple convenience.

A component can be a piece of software or hardware, as well as persons or groups if we extend the definition. The consideration of people becomes especially relevant in the context of ecologies, in which systems are placed in a broader context that considers people’s interaction with them, as this gives rise to important behavior that impacts a system’s functioning.

Within a system, each part has a role or function determined in respect to the whole as well as to the other parts. The role or function of a component is typically fixed, predefined, though there are also exceptions, especially when the scope of a component is enlarged, respectively reduced to the degree that the component can be removed or ignored. What one considers or doesn’t consider as part of the system defines the system’s boundaries; it’s what distinguishes it from other systems within the environment(s) considered.

The interaction between the components comes down to the exchange, transmission and processing of data found in different aggregations, ranging from signals to complex data structures. If in non-IT-based systems the changes are determined by the inflow, respectively outflow of energy, in IT the flow is considered in terms of data in its various aggregations (information, knowledge). The data flow (also information flow) represents the ‘fluid’ that nourishes a system’s ‘organism’.

One grasps the complexity the moment one attempts to describe a system in terms of its components, respectively the dependencies existing between them in terms of data and processes. If in nature the processes are extrapolated, in IT they are predefined (even if the knowledge about them is not available). In addition, the less knowledge one has about the infrastructure, the higher the apparent complexity. Even if the system is not necessarily complex, the lack of knowledge and certainty about it makes it appear so. The more one needs to dig for information and knowledge to reach an acceptable level of understanding and logical depth, the more time is needed for designing a solution.
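As a rough sketch of such a description, a system can be modeled as a directed graph of components and the data or process dependencies between them, from which one can derive, for instance, which components are impacted when another changes; the component names and dependencies below are purely illustrative.

```python
# Hypothetical sketch: a system described as components and the data/process
# dependencies between them, modeled as a directed graph.

# Assumed components and data flows (illustrative only)
dependencies = {
    "CRM": ["ERP"],           # CRM sends customer data to the ERP
    "ERP": ["DWH"],           # the ERP feeds the data warehouse
    "DWH": ["Reporting"],     # the warehouse feeds reporting
    "Reporting": [],
}

def downstream(component: str, deps: dict[str, list[str]]) -> set[str]:
    """All components directly or indirectly fed by the given component -
    the part of the system impacted when this component changes."""
    impacted: set[str] = set()
    stack = list(deps.get(component, []))
    while stack:
        current = stack.pop()
        if current not in impacted:
            impacted.add(current)
            stack.extend(deps.get(current, []))
    return impacted

print(downstream("CRM", dependencies))   # e.g. {'ERP', 'DWH', 'Reporting'}
```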

Saint Exupéry’s definition of simplicity applies from a system’s functional point of view, though it doesn’t address the relative knowledge about the system, which is often implicit (in people’s heads). People have only fragmented knowledge about the system, which makes it difficult to create the whole picture. It’s typically the role of system or process operational manuals, respectively of data descriptions, to make that knowledge explicit, also establishing a foundation for common knowledge and for further communication and understanding.

Between the apparent (perceived) and the real complexity of a system there’s an important gap that needs to be addressed if one wants to manage the systems adequately, respectively to simplify them. Often simplification happens when components or whole systems are replaced, consolidated, or migrated, with mixes between these approaches existing as well. Simplifications at the data level (aka data harmonization) or at the process level (aka process optimization and redesign) can have an important impact, being inherent to the good (optimal) functioning of systems.

Whether these changes occur in a big bang or in gradual iterations is a question of available resources and organizational capabilities, including the ability to handle such projects, respectively of the impact, opportunities and risks associated with such endeavors. Beyond this, it’s important to regard the problems from a systemic and systematic point of view, in which the ecology’s role is important.


Written: Jun-2020, Last Reviewed: Mar-2024

24 June 2020

Strategic Management: Simplicity I (Simple, but not that Simple)

Strategic Management
Strategic Management Series

Simplicity of design has been for centuries the holy grail of architects, while software designers seem to situate themselves in opposition to the trend, as they aim at using a mix of technologies that usually increases an architecture’s complexity (sometimes the more, the newer and fancier, the better). Unfortunately, despite the implied but not necessarily reachable potential, each component added to an information system or infrastructure has the potential of increasing the overall complexity by a factor proportional to the degree of interactions it creates, respectively by the number of issues it creates or allows to propagate through these interactions.
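A back-of-the-envelope way to see why each added component weighs so heavily: the number of potential pairwise interactions between n components grows quadratically rather than linearly, as the short sketch below illustrates; the figures are purely illustrative of the principle, not of any particular landscape.

```python
# Back-of-the-envelope illustration: potential pairwise interactions
# between n components grow as n*(n-1)/2, i.e. much faster than n itself.
def potential_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, potential_interactions(n))   # 5 -> 10, 10 -> 45, 20 -> 190
```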

Conversely, one often talks about simplicity in IT without stating what is intended by it, and it can mean many things. Quite often the aim is packed within the ‘keep it simple, stupid’ (aka KISS) mantra, a modern and pejorative alternative to Occam’s razor. KISS became a principle in software architecture design, and it can mean that a simple solution works better than a complex one, or that pursuing something in the simplest manner possible is usually better. The nuances are wide enough to cover a broad spectrum of solutions, arriving at statements that the simplest choice to make is the most appropriate one, something that’s not necessarily true in IT, where complexity finds itself at home.

Starting with the considerable number of technologies coexisting in integrations and ending with the exceptions existing in processes or the quality of data, things are almost never as simple as one may wish. An IT infrastructure’s complexity depends on the number of existing components, on whether they come from different generations or from different vendors, on whether they are deployed on different operating systems or supported by different service providers, on the number of customizations made, on the degree of overlap of the data and the integrations needed to keep the data in sync, respectively on the differences existing in data models, quality and use. In general, the more variance, randomness, and challenges one has, the higher the overall complexity.

Paraphrasing Saint Exupéry, in IT simplicity is reached when there is no longer anything to add or anything to take away; or, in Hans Hofmann’s words, ‘the ability to simplify means to eliminate the unnecessary so that the necessary may speak’. This refers to the features, what a piece of software can do, respectively the functionality, how a certain outcome is reached, which come to be packed in various logical aggregations (function point, functional requirement, story, epic, model, product, etc.) or physical aggregations (classes, components, packages, services, models, etc.). These are the levels at which one needs to address simplicity adequately.

To make something simple, one must be able either to design a solution down to the detail so that there’s nothing to add or remove, or to start with something and add or remove things until simplicity is reached. Both approaches involve considerable effort, time, and multiple iterations; however, the first approach can easily become utopian, as some architectures are so complex that sooner or later the second approach comes into play. Therefore, one generally needs to focus on what seems an optimal solution and optimize it continuously in further iterations. Aiming for perfection, whether from the beginning or later in the improvement process, is a foolhardy wish.

Even if simplicity is hard to achieve, one can still talk about the elegance of a solution, scenarios in which the various components fit together like the pieces of a puzzle, or about robustness, reliability, correctness, maintainability, (re)usability, or learnability. These latter characteristics are known in Software Engineering as (software) quality attributes.
